Test Report: Docker_Linux_docker_arm64 19529

d7f9f66bdcb95e27f1005d5ce9d414c92a72aaf8:2024-08-28:35983

Tests failed: 1/343

| Order | Failed Test                  | Duration (s) |
|-------|------------------------------|--------------|
| 33    | TestAddons/parallel/Registry | 74.63        |
TestAddons/parallel/Registry (74.63s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 5.241651ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-2d9gq" [c7dd58ff-e9b5-4511-9a22-023705b9fdfe] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.00375763s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-8svd4" [c0749a82-4329-4dc6-92f9-0bd490e250bc] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004003839s
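Both readiness gates above poll until the labeled pods report Running. A close manual equivalent, as a sketch (context name, namespace, labels, and timeouts taken from the log lines above; kubectl wait checks the Ready condition rather than the Running phase):

	kubectl --context addons-161312 -n kube-system wait pod -l actual-registry=true --for=condition=Ready --timeout=6m
	kubectl --context addons-161312 -n kube-system wait pod -l registry-proxy=true --for=condition=Ready --timeout=10m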
addons_test.go:342: (dbg) Run:  kubectl --context addons-161312 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-161312 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-161312 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.126021339s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-161312 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
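The failing step is the in-cluster HTTP probe: both registry pods were healthy, but wget against the service DNS name got no response within the 1m0s timeout. For triage, the probe can be re-run against the same profile (command copied verbatim from the log above; a healthy registry should answer HTTP/1.1 200):

	kubectl --context addons-161312 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"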
addons_test.go:361: (dbg) Run:  out/minikube-linux-arm64 -p addons-161312 ip
2024/08/28 17:05:18 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-arm64 -p addons-161312 addons disable registry --alsologtostderr -v=1
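The DEBUG GET above probes the registry via the node IP (192.168.49.2) on port 5000. As a cross-check from the host, the same container port is also published on loopback; a sketch, with the host port taken from the inspect output below and not guaranteed stable across runs:

	docker port addons-161312 5000/tcp          # prints 127.0.0.1:32770 in this run
	curl -sI http://127.0.0.1:32770/v2/         # registry HTTP API root; expect a 200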
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-161312
helpers_test.go:235: (dbg) docker inspect addons-161312:

-- stdout --
	[
	    {
	        "Id": "d0d17dedb03f458c68fdc4af2972292aa4ef3d26a79f0a2bcff94f606c023b8e",
	        "Created": "2024-08-28T16:52:03.925678699Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 8846,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-28T16:52:04.120703728Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2cc8dc59c2b679153d99f84cc70dab3e87225f8a0d04f61969b54714a9c4cd4d",
	        "ResolvConfPath": "/var/lib/docker/containers/d0d17dedb03f458c68fdc4af2972292aa4ef3d26a79f0a2bcff94f606c023b8e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d0d17dedb03f458c68fdc4af2972292aa4ef3d26a79f0a2bcff94f606c023b8e/hostname",
	        "HostsPath": "/var/lib/docker/containers/d0d17dedb03f458c68fdc4af2972292aa4ef3d26a79f0a2bcff94f606c023b8e/hosts",
	        "LogPath": "/var/lib/docker/containers/d0d17dedb03f458c68fdc4af2972292aa4ef3d26a79f0a2bcff94f606c023b8e/d0d17dedb03f458c68fdc4af2972292aa4ef3d26a79f0a2bcff94f606c023b8e-json.log",
	        "Name": "/addons-161312",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-161312:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-161312",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/982f7ea35e30b02f6cc0e1b094fc8906c52d6129d738331ac8b32ea0907c0156-init/diff:/var/lib/docker/overlay2/c18b9d3934b1670f096f7301a8e8724fdff2e22642728bcfca597c0633025683/diff",
	                "MergedDir": "/var/lib/docker/overlay2/982f7ea35e30b02f6cc0e1b094fc8906c52d6129d738331ac8b32ea0907c0156/merged",
	                "UpperDir": "/var/lib/docker/overlay2/982f7ea35e30b02f6cc0e1b094fc8906c52d6129d738331ac8b32ea0907c0156/diff",
	                "WorkDir": "/var/lib/docker/overlay2/982f7ea35e30b02f6cc0e1b094fc8906c52d6129d738331ac8b32ea0907c0156/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-161312",
	                "Source": "/var/lib/docker/volumes/addons-161312/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-161312",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-161312",
	                "name.minikube.sigs.k8s.io": "addons-161312",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c0db1cffe61dacfb82935168bf6114819f7a3006d7a9f4dd00069c1383acf367",
	            "SandboxKey": "/var/run/docker/netns/c0db1cffe61d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-161312": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "31812af31d83671e38b71e7c104f91ddb3ac10c99e30160d02416dbcffc4b1aa",
	                    "EndpointID": "554a67bcbe376898a0cfb0bae8452ebb9fea0a0b25de82bad87e5f732f3cd09d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-161312",
	                        "d0d17dedb03f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
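The inspect output confirms the node container is healthy at the Docker level: State.Status is running with no OOM kill or restarts, and it holds the expected identity on network addons-161312 (IP 192.168.49.2, the same address probed above). Individual fields can be pulled without the full dump using docker's Go-template syntax, e.g.:

	docker inspect addons-161312 --format '{{.State.Status}}'
	docker inspect addons-161312 --format '{{(index .NetworkSettings.Networks "addons-161312").IPAddress}}'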
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-161312 -n addons-161312
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-161312 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-161312 logs -n 25: (1.515036083s)
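minikube's own log collector is the canonical post-mortem source; besides the inline dump below, the same output can be written to a file for attaching to an issue (--file is a standard minikube logs flag):

	out/minikube-linux-arm64 -p addons-161312 logs --file=registry-postmortem.log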
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-224586   | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC |                     |
	|         | -p download-only-224586                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                                                |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube               | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC | 28 Aug 24 16:51 UTC |
	| delete  | -p download-only-224586                                                                     | download-only-224586   | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC | 28 Aug 24 16:51 UTC |
	| start   | -o=json --download-only                                                                     | download-only-427986   | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC |                     |
	|         | -p download-only-427986                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                                                                |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube               | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC | 28 Aug 24 16:51 UTC |
	| delete  | -p download-only-427986                                                                     | download-only-427986   | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC | 28 Aug 24 16:51 UTC |
	| delete  | -p download-only-224586                                                                     | download-only-224586   | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC | 28 Aug 24 16:51 UTC |
	| delete  | -p download-only-427986                                                                     | download-only-427986   | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC | 28 Aug 24 16:51 UTC |
	| start   | --download-only -p                                                                          | download-docker-651207 | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC |                     |
	|         | download-docker-651207                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | -p download-docker-651207                                                                   | download-docker-651207 | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC | 28 Aug 24 16:51 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-834196   | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC |                     |
	|         | binary-mirror-834196                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:40931                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-834196                                                                     | binary-mirror-834196   | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC | 28 Aug 24 16:51 UTC |
	| addons  | enable dashboard -p                                                                         | addons-161312          | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC |                     |
	|         | addons-161312                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-161312          | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC |                     |
	|         | addons-161312                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-161312 --wait=true                                                                | addons-161312          | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC | 28 Aug 24 16:55 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | addons-161312 addons disable                                                                | addons-161312          | jenkins | v1.33.1 | 28 Aug 24 16:55 UTC | 28 Aug 24 16:56 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-161312 addons disable                                                                | addons-161312          | jenkins | v1.33.1 | 28 Aug 24 17:04 UTC | 28 Aug 24 17:04 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | addons-161312 addons                                                                        | addons-161312          | jenkins | v1.33.1 | 28 Aug 24 17:04 UTC | 28 Aug 24 17:05 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-161312 addons                                                                        | addons-161312          | jenkins | v1.33.1 | 28 Aug 24 17:05 UTC | 28 Aug 24 17:05 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-161312          | jenkins | v1.33.1 | 28 Aug 24 17:05 UTC | 28 Aug 24 17:05 UTC |
	|         | -p addons-161312                                                                            |                        |         |         |                     |                     |
	| ip      | addons-161312 ip                                                                            | addons-161312          | jenkins | v1.33.1 | 28 Aug 24 17:05 UTC | 28 Aug 24 17:05 UTC |
	| addons  | addons-161312 addons disable                                                                | addons-161312          | jenkins | v1.33.1 | 28 Aug 24 17:05 UTC | 28 Aug 24 17:05 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ssh     | addons-161312 ssh cat                                                                       | addons-161312          | jenkins | v1.33.1 | 28 Aug 24 17:05 UTC | 28 Aug 24 17:05 UTC |
	|         | /opt/local-path-provisioner/pvc-b8f607a9-8b55-4e8d-8fe4-271ae4fca8c7_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-161312 addons disable                                                                | addons-161312          | jenkins | v1.33.1 | 28 Aug 24 17:05 UTC |                     |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/28 16:51:37
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0828 16:51:37.905156    8351 out.go:345] Setting OutFile to fd 1 ...
	I0828 16:51:37.905310    8351 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 16:51:37.905338    8351 out.go:358] Setting ErrFile to fd 2...
	I0828 16:51:37.905344    8351 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 16:51:37.905607    8351 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-2268/.minikube/bin
	I0828 16:51:37.906075    8351 out.go:352] Setting JSON to false
	I0828 16:51:37.906872    8351 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":2045,"bootTime":1724861853,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0828 16:51:37.906944    8351 start.go:139] virtualization:  
	I0828 16:51:37.910242    8351 out.go:177] * [addons-161312] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0828 16:51:37.913888    8351 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 16:51:37.913934    8351 notify.go:220] Checking for updates...
	I0828 16:51:37.919249    8351 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 16:51:37.921854    8351 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19529-2268/kubeconfig
	I0828 16:51:37.924407    8351 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-2268/.minikube
	I0828 16:51:37.927053    8351 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0828 16:51:37.929835    8351 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 16:51:37.932679    8351 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 16:51:37.955873    8351 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0828 16:51:37.955994    8351 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0828 16:51:38.013913    8351 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-28 16:51:38.005032488 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0828 16:51:38.014024    8351 docker.go:307] overlay module found
	I0828 16:51:38.016988    8351 out.go:177] * Using the docker driver based on user configuration
	I0828 16:51:38.019445    8351 start.go:297] selected driver: docker
	I0828 16:51:38.019466    8351 start.go:901] validating driver "docker" against <nil>
	I0828 16:51:38.019481    8351 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 16:51:38.020107    8351 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0828 16:51:38.094224    8351 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-28 16:51:38.084433705 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0828 16:51:38.094423    8351 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0828 16:51:38.094660    8351 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 16:51:38.097336    8351 out.go:177] * Using Docker driver with root privileges
	I0828 16:51:38.100097    8351 cni.go:84] Creating CNI manager for ""
	I0828 16:51:38.100139    8351 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0828 16:51:38.100152    8351 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0828 16:51:38.100277    8351 start.go:340] cluster config:
	{Name:addons-161312 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-161312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 16:51:38.103262    8351 out.go:177] * Starting "addons-161312" primary control-plane node in "addons-161312" cluster
	I0828 16:51:38.106067    8351 cache.go:121] Beginning downloading kic base image for docker with docker
	I0828 16:51:38.109137    8351 out.go:177] * Pulling base image v0.0.44-1724775115-19521 ...
	I0828 16:51:38.111748    8351 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0828 16:51:38.111831    8351 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19529-2268/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0828 16:51:38.111845    8351 cache.go:56] Caching tarball of preloaded images
	I0828 16:51:38.111846    8351 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce in local docker daemon
	I0828 16:51:38.111940    8351 preload.go:172] Found /home/jenkins/minikube-integration/19529-2268/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0828 16:51:38.111951    8351 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0828 16:51:38.112301    8351 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/config.json ...
	I0828 16:51:38.112418    8351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/config.json: {Name:mk19f9d2d3e637445941a22572c01984315af055 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 16:51:38.128160    8351 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce to local cache
	I0828 16:51:38.128291    8351 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce in local cache directory
	I0828 16:51:38.128320    8351 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce in local cache directory, skipping pull
	I0828 16:51:38.128326    8351 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce exists in cache, skipping pull
	I0828 16:51:38.128333    8351 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce as a tarball
	I0828 16:51:38.128341    8351 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce from local cache
	I0828 16:51:55.502629    8351 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce from cached tarball
	I0828 16:51:55.502667    8351 cache.go:194] Successfully downloaded all kic artifacts
	I0828 16:51:55.502708    8351 start.go:360] acquireMachinesLock for addons-161312: {Name:mk377363816433b11c915784309a449f180b325a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 16:51:55.502832    8351 start.go:364] duration metric: took 101.936µs to acquireMachinesLock for "addons-161312"
	I0828 16:51:55.502863    8351 start.go:93] Provisioning new machine with config: &{Name:addons-161312 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-161312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0828 16:51:55.502947    8351 start.go:125] createHost starting for "" (driver="docker")
	I0828 16:51:55.505360    8351 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0828 16:51:55.505647    8351 start.go:159] libmachine.API.Create for "addons-161312" (driver="docker")
	I0828 16:51:55.505696    8351 client.go:168] LocalClient.Create starting
	I0828 16:51:55.505862    8351 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19529-2268/.minikube/certs/ca.pem
	I0828 16:51:56.608957    8351 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19529-2268/.minikube/certs/cert.pem
	I0828 16:51:57.317898    8351 cli_runner.go:164] Run: docker network inspect addons-161312 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0828 16:51:57.333999    8351 cli_runner.go:211] docker network inspect addons-161312 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0828 16:51:57.334094    8351 network_create.go:284] running [docker network inspect addons-161312] to gather additional debugging logs...
	I0828 16:51:57.334117    8351 cli_runner.go:164] Run: docker network inspect addons-161312
	W0828 16:51:57.349754    8351 cli_runner.go:211] docker network inspect addons-161312 returned with exit code 1
	I0828 16:51:57.349786    8351 network_create.go:287] error running [docker network inspect addons-161312]: docker network inspect addons-161312: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-161312 not found
	I0828 16:51:57.349802    8351 network_create.go:289] output of [docker network inspect addons-161312]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-161312 not found
	
	** /stderr **
	I0828 16:51:57.349897    8351 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0828 16:51:57.364728    8351 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400176f090}
	I0828 16:51:57.364775    8351 network_create.go:124] attempt to create docker network addons-161312 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0828 16:51:57.364833    8351 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-161312 addons-161312
	I0828 16:51:57.442845    8351 network_create.go:108] docker network addons-161312 192.168.49.0/24 created
	I0828 16:51:57.442875    8351 kic.go:121] calculated static IP "192.168.49.2" for the "addons-161312" container
	I0828 16:51:57.442946    8351 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0828 16:51:57.461226    8351 cli_runner.go:164] Run: docker volume create addons-161312 --label name.minikube.sigs.k8s.io=addons-161312 --label created_by.minikube.sigs.k8s.io=true
	I0828 16:51:57.480294    8351 oci.go:103] Successfully created a docker volume addons-161312
	I0828 16:51:57.480384    8351 cli_runner.go:164] Run: docker run --rm --name addons-161312-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-161312 --entrypoint /usr/bin/test -v addons-161312:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce -d /var/lib
	I0828 16:51:59.589271    8351 cli_runner.go:217] Completed: docker run --rm --name addons-161312-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-161312 --entrypoint /usr/bin/test -v addons-161312:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce -d /var/lib: (2.108845177s)
	I0828 16:51:59.589301    8351 oci.go:107] Successfully prepared a docker volume addons-161312
	I0828 16:51:59.589326    8351 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0828 16:51:59.589345    8351 kic.go:194] Starting extracting preloaded images to volume ...
	I0828 16:51:59.589424    8351 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19529-2268/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-161312:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce -I lz4 -xf /preloaded.tar -C /extractDir
	I0828 16:52:03.862134    8351 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19529-2268/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-161312:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce -I lz4 -xf /preloaded.tar -C /extractDir: (4.272655839s)
	I0828 16:52:03.862164    8351 kic.go:203] duration metric: took 4.272816243s to extract preloaded images to volume ...
	W0828 16:52:03.862307    8351 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0828 16:52:03.862438    8351 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0828 16:52:03.910797    8351 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-161312 --name addons-161312 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-161312 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-161312 --network addons-161312 --ip 192.168.49.2 --volume addons-161312:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce
	I0828 16:52:04.289431    8351 cli_runner.go:164] Run: docker container inspect addons-161312 --format={{.State.Running}}
	I0828 16:52:04.314235    8351 cli_runner.go:164] Run: docker container inspect addons-161312 --format={{.State.Status}}
	I0828 16:52:04.338769    8351 cli_runner.go:164] Run: docker exec addons-161312 stat /var/lib/dpkg/alternatives/iptables
	I0828 16:52:04.421474    8351 oci.go:144] the created container "addons-161312" has a running status.
	I0828 16:52:04.421504    8351 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19529-2268/.minikube/machines/addons-161312/id_rsa...
	I0828 16:52:04.758403    8351 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19529-2268/.minikube/machines/addons-161312/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0828 16:52:04.783143    8351 cli_runner.go:164] Run: docker container inspect addons-161312 --format={{.State.Status}}
	I0828 16:52:04.826922    8351 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0828 16:52:04.826943    8351 kic_runner.go:114] Args: [docker exec --privileged addons-161312 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0828 16:52:04.924579    8351 cli_runner.go:164] Run: docker container inspect addons-161312 --format={{.State.Status}}
	I0828 16:52:04.954310    8351 machine.go:93] provisionDockerMachine start ...
	I0828 16:52:04.954407    8351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-161312
	I0828 16:52:05.004743    8351 main.go:141] libmachine: Using SSH client type: native
	I0828 16:52:05.005006    8351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0828 16:52:05.005015    8351 main.go:141] libmachine: About to run SSH command:
	hostname
	I0828 16:52:05.170512    8351 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-161312
	
	I0828 16:52:05.170554    8351 ubuntu.go:169] provisioning hostname "addons-161312"
	I0828 16:52:05.170661    8351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-161312
	I0828 16:52:05.189110    8351 main.go:141] libmachine: Using SSH client type: native
	I0828 16:52:05.189411    8351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0828 16:52:05.189426    8351 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-161312 && echo "addons-161312" | sudo tee /etc/hostname
	I0828 16:52:05.343132    8351 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-161312
	
	I0828 16:52:05.343253    8351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-161312
	I0828 16:52:05.370686    8351 main.go:141] libmachine: Using SSH client type: native
	I0828 16:52:05.370921    8351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0828 16:52:05.370937    8351 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-161312' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-161312/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-161312' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0828 16:52:05.507534    8351 main.go:141] libmachine: SSH cmd err, output: <nil>: 
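
The shell fragment above is the idempotent /etc/hosts update minikube runs over SSH: leave the file alone if the hostname already appears on some line, otherwise rewrite an existing 127.0.1.1 entry or append a new one. A minimal Go sketch of the same logic (ensureHostsEntry is a hypothetical helper, not minikube's actual code):

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    	"strings"
    )

    // ensureHostsEntry mirrors the shell above: no-op when the hostname is
    // already present, otherwise rewrite the 127.0.1.1 line or append one.
    func ensureHostsEntry(path, hostname string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	// Already present on some line? Nothing to do.
    	if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(hostname) + `$`).Match(data) {
    		return nil
    	}
    	lines := strings.Split(string(data), "\n")
    	replaced := false
    	for i, l := range lines {
    		if strings.HasPrefix(l, "127.0.1.1") {
    			lines[i] = "127.0.1.1 " + hostname
    			replaced = true
    			break
    		}
    	}
    	if !replaced {
    		lines = append(lines, fmt.Sprintf("127.0.1.1 %s", hostname))
    	}
    	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
    }

    func main() {
    	if err := ensureHostsEntry("/etc/hosts", "addons-161312"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
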
	I0828 16:52:05.507560    8351 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19529-2268/.minikube CaCertPath:/home/jenkins/minikube-integration/19529-2268/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19529-2268/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19529-2268/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19529-2268/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19529-2268/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19529-2268/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19529-2268/.minikube}
	I0828 16:52:05.507591    8351 ubuntu.go:177] setting up certificates
	I0828 16:52:05.507600    8351 provision.go:84] configureAuth start
	I0828 16:52:05.507667    8351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-161312
	I0828 16:52:05.524897    8351 provision.go:143] copyHostCerts
	I0828 16:52:05.524988    8351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-2268/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19529-2268/.minikube/ca.pem (1078 bytes)
	I0828 16:52:05.525118    8351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-2268/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19529-2268/.minikube/cert.pem (1123 bytes)
	I0828 16:52:05.525194    8351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-2268/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19529-2268/.minikube/key.pem (1675 bytes)
	I0828 16:52:05.525249    8351 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19529-2268/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19529-2268/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19529-2268/.minikube/certs/ca-key.pem org=jenkins.addons-161312 san=[127.0.0.1 192.168.49.2 addons-161312 localhost minikube]
	I0828 16:52:05.956600    8351 provision.go:177] copyRemoteCerts
	I0828 16:52:05.956665    8351 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0828 16:52:05.956705    8351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-161312
	I0828 16:52:05.973956    8351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19529-2268/.minikube/machines/addons-161312/id_rsa Username:docker}
	I0828 16:52:06.073367    8351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-2268/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0828 16:52:06.098902    8351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-2268/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0828 16:52:06.125727    8351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-2268/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0828 16:52:06.151800    8351 provision.go:87] duration metric: took 644.186777ms to configureAuth
	I0828 16:52:06.151828    8351 ubuntu.go:193] setting minikube options for container-runtime
	I0828 16:52:06.152031    8351 config.go:182] Loaded profile config "addons-161312": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 16:52:06.152094    8351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-161312
	I0828 16:52:06.168797    8351 main.go:141] libmachine: Using SSH client type: native
	I0828 16:52:06.169071    8351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0828 16:52:06.169092    8351 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0828 16:52:06.303814    8351 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0828 16:52:06.303833    8351 ubuntu.go:71] root file system type: overlay
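
The "overlay" answer above comes from probing the guest with df --output=fstype /, whose output is a header line followed by the filesystem type. The same probe run locally rather than over SSH, as a sketch:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // rootFSType runs the probe the provisioner issues over SSH:
    // `df --output=fstype /` prints a header line, then the type itself.
    func rootFSType() (string, error) {
    	out, err := exec.Command("df", "--output=fstype", "/").Output()
    	if err != nil {
    		return "", err
    	}
    	lines := strings.Split(strings.TrimSpace(string(out)), "\n")
    	return lines[len(lines)-1], nil // last line, e.g. "overlay"
    }

    func main() {
    	t, err := rootFSType()
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(t) // "overlay" inside the kicbase container
    }
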
	I0828 16:52:06.303949    8351 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0828 16:52:06.304021    8351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-161312
	I0828 16:52:06.321476    8351 main.go:141] libmachine: Using SSH client type: native
	I0828 16:52:06.321728    8351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0828 16:52:06.321810    8351 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0828 16:52:06.473316    8351 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0828 16:52:06.473429    8351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-161312
	I0828 16:52:06.491588    8351 main.go:141] libmachine: Using SSH client type: native
	I0828 16:52:06.491844    8351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0828 16:52:06.491866    8351 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0828 16:52:07.263521    8351 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-08-12 11:49:05.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-08-28 16:52:06.466932266 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
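
The command that produced the diff above is a write-if-changed idiom: diff the freshly rendered unit against the installed one, and only move it into place, daemon-reload, and restart Docker when they differ, so re-provisioning an already-configured machine is a cheap no-op. A Go sketch of the same pattern (replaceIfChanged is hypothetical, not minikube source):

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    	"os/exec"
    )

    // replaceIfChanged mirrors `diff || { mv && systemctl restart }`:
    // install the new unit and bounce the service only when content differs.
    func replaceIfChanged(path string, newContent []byte, restart ...string) error {
    	old, err := os.ReadFile(path)
    	if err == nil && bytes.Equal(old, newContent) {
    		return nil // unchanged: skip daemon-reload and restart entirely
    	}
    	if err := os.WriteFile(path, newContent, 0644); err != nil {
    		return err
    	}
    	for _, args := range [][]string{{"daemon-reload"}, restart} {
    		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
    			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
    		}
    	}
    	return nil
    }

    func main() {
    	unit := []byte("[Service]\nExecStart=\nExecStart=/usr/bin/dockerd ...\n")
    	if err := replaceIfChanged("/lib/systemd/system/docker.service", unit, "restart", "docker"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
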
	
	I0828 16:52:07.263607    8351 machine.go:96] duration metric: took 2.309272218s to provisionDockerMachine
	I0828 16:52:07.263635    8351 client.go:171] duration metric: took 11.757925354s to LocalClient.Create
	I0828 16:52:07.263689    8351 start.go:167] duration metric: took 11.75804484s to libmachine.API.Create "addons-161312"
	I0828 16:52:07.263740    8351 start.go:293] postStartSetup for "addons-161312" (driver="docker")
	I0828 16:52:07.263767    8351 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0828 16:52:07.263866    8351 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0828 16:52:07.263934    8351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-161312
	I0828 16:52:07.280743    8351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19529-2268/.minikube/machines/addons-161312/id_rsa Username:docker}
	I0828 16:52:07.377135    8351 ssh_runner.go:195] Run: cat /etc/os-release
	I0828 16:52:07.380663    8351 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0828 16:52:07.380698    8351 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0828 16:52:07.380711    8351 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0828 16:52:07.380739    8351 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0828 16:52:07.380755    8351 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-2268/.minikube/addons for local assets ...
	I0828 16:52:07.380856    8351 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-2268/.minikube/files for local assets ...
	I0828 16:52:07.380884    8351 start.go:296] duration metric: took 117.122488ms for postStartSetup
	I0828 16:52:07.381219    8351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-161312
	I0828 16:52:07.397829    8351 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/config.json ...
	I0828 16:52:07.398118    8351 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0828 16:52:07.398172    8351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-161312
	I0828 16:52:07.415436    8351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19529-2268/.minikube/machines/addons-161312/id_rsa Username:docker}
	I0828 16:52:07.507976    8351 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0828 16:52:07.512492    8351 start.go:128] duration metric: took 12.009522513s to createHost
	I0828 16:52:07.512514    8351 start.go:83] releasing machines lock for "addons-161312", held for 12.009669583s
	I0828 16:52:07.512586    8351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-161312
	I0828 16:52:07.529538    8351 ssh_runner.go:195] Run: cat /version.json
	I0828 16:52:07.529557    8351 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0828 16:52:07.529593    8351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-161312
	I0828 16:52:07.529618    8351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-161312
	I0828 16:52:07.547836    8351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19529-2268/.minikube/machines/addons-161312/id_rsa Username:docker}
	I0828 16:52:07.548565    8351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19529-2268/.minikube/machines/addons-161312/id_rsa Username:docker}
	I0828 16:52:07.785303    8351 ssh_runner.go:195] Run: systemctl --version
	I0828 16:52:07.789741    8351 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0828 16:52:07.794137    8351 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0828 16:52:07.824172    8351 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0828 16:52:07.824308    8351 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0828 16:52:07.853713    8351 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0828 16:52:07.853792    8351 start.go:495] detecting cgroup driver to use...
	I0828 16:52:07.853841    8351 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0828 16:52:07.853963    8351 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0828 16:52:07.870786    8351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0828 16:52:07.881374    8351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0828 16:52:07.891392    8351 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0828 16:52:07.891464    8351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0828 16:52:07.902034    8351 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0828 16:52:07.912886    8351 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0828 16:52:07.923036    8351 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0828 16:52:07.933344    8351 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0828 16:52:07.942854    8351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0828 16:52:07.954110    8351 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0828 16:52:07.964137    8351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
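
The run of sed -i -r commands above rewrites /etc/containerd/config.toml line by line, for example forcing SystemdCgroup = false to match the cgroupfs driver detected on the host. The same line-oriented rewrite in Go (patchToml is an illustrative helper, not minikube's implementation):

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    // patchToml applies the kind of line-oriented edit the `sed -i -r`
    // commands above perform, preserving leading indentation via a capture.
    func patchToml(path, pattern, replacement string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	re := regexp.MustCompile("(?m)" + pattern)
    	return os.WriteFile(path, re.ReplaceAll(data, []byte(replacement)), 0644)
    }

    func main() {
    	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
    	err := patchToml("/etc/containerd/config.toml",
    		`^( *)SystemdCgroup = .*$`, `${1}SystemdCgroup = false`)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
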
	I0828 16:52:07.974082    8351 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0828 16:52:07.983000    8351 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0828 16:52:07.991928    8351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 16:52:08.093805    8351 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0828 16:52:08.203511    8351 start.go:495] detecting cgroup driver to use...
	I0828 16:52:08.203578    8351 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0828 16:52:08.203653    8351 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0828 16:52:08.221185    8351 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0828 16:52:08.221302    8351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0828 16:52:08.233770    8351 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0828 16:52:08.251409    8351 ssh_runner.go:195] Run: which cri-dockerd
	I0828 16:52:08.255212    8351 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0828 16:52:08.265000    8351 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0828 16:52:08.285824    8351 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0828 16:52:08.393852    8351 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0828 16:52:08.486129    8351 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0828 16:52:08.486308    8351 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0828 16:52:08.505926    8351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 16:52:08.604255    8351 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0828 16:52:08.878379    8351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0828 16:52:08.891001    8351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0828 16:52:08.904114    8351 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0828 16:52:08.999635    8351 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0828 16:52:09.104457    8351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 16:52:09.196709    8351 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0828 16:52:09.211512    8351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0828 16:52:09.223761    8351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 16:52:09.316587    8351 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0828 16:52:09.400754    8351 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0828 16:52:09.400845    8351 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0828 16:52:09.404971    8351 start.go:563] Will wait 60s for crictl version
	I0828 16:52:09.405078    8351 ssh_runner.go:195] Run: which crictl
	I0828 16:52:09.408680    8351 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0828 16:52:09.447652    8351 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0828 16:52:09.447762    8351 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0828 16:52:09.469254    8351 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0828 16:52:09.494723    8351 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0828 16:52:09.494848    8351 cli_runner.go:164] Run: docker network inspect addons-161312 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0828 16:52:09.510923    8351 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0828 16:52:09.514879    8351 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 16:52:09.526173    8351 kubeadm.go:883] updating cluster {Name:addons-161312 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-161312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0828 16:52:09.526302    8351 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0828 16:52:09.526363    8351 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0828 16:52:09.545258    8351 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0828 16:52:09.545279    8351 docker.go:615] Images already preloaded, skipping extraction
	I0828 16:52:09.545346    8351 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0828 16:52:09.563624    8351 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0828 16:52:09.563650    8351 cache_images.go:84] Images are preloaded, skipping loading
	I0828 16:52:09.563677    8351 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.0 docker true true} ...
	I0828 16:52:09.563783    8351 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-161312 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-161312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0828 16:52:09.563855    8351 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0828 16:52:09.613500    8351 cni.go:84] Creating CNI manager for ""
	I0828 16:52:09.613524    8351 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0828 16:52:09.613534    8351 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0828 16:52:09.613552    8351 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-161312 NodeName:addons-161312 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0828 16:52:09.613703    8351 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-161312"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
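
The kubeadm config above is rendered from the cluster parameters logged earlier (node IP, API server port, CRI socket, cluster name). A minimal sketch of how such a document can be produced with text/template; the template and field names here are illustrative, not minikube's actual template:

    package main

    import (
    	"os"
    	"text/template"
    )

    // Illustrative fragment: render the InitConfiguration stanza from
    // cluster parameters, as the generated YAML above suggests.
    const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.NodeIP}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      criSocket: unix://{{.CRISocket}}
      name: "{{.ClusterName}}"
      kubeletExtraArgs:
        node-ip: {{.NodeIP}}
    `

    func main() {
    	t := template.Must(template.New("kubeadm").Parse(initCfg))
    	_ = t.Execute(os.Stdout, map[string]any{
    		"NodeIP":        "192.168.49.2",
    		"APIServerPort": 8443,
    		"CRISocket":     "/var/run/cri-dockerd.sock",
    		"ClusterName":   "addons-161312",
    	})
    }
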
	
	I0828 16:52:09.613777    8351 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0828 16:52:09.624345    8351 binaries.go:44] Found k8s binaries, skipping transfer
	I0828 16:52:09.624416    8351 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0828 16:52:09.633257    8351 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0828 16:52:09.654397    8351 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0828 16:52:09.673800    8351 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0828 16:52:09.692388    8351 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0828 16:52:09.695865    8351 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 16:52:09.707257    8351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 16:52:09.789493    8351 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 16:52:09.803549    8351 certs.go:68] Setting up /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312 for IP: 192.168.49.2
	I0828 16:52:09.803567    8351 certs.go:194] generating shared ca certs ...
	I0828 16:52:09.803585    8351 certs.go:226] acquiring lock for ca certs: {Name:mk4271d0c0edfadb28da5225f3695d190103a80c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 16:52:09.803716    8351 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19529-2268/.minikube/ca.key
	I0828 16:52:10.200053    8351 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19529-2268/.minikube/ca.crt ...
	I0828 16:52:10.200089    8351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-2268/.minikube/ca.crt: {Name:mkf3724c4bba2c3d496e6bccd2159bfc8c93663f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 16:52:10.200324    8351 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19529-2268/.minikube/ca.key ...
	I0828 16:52:10.200336    8351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-2268/.minikube/ca.key: {Name:mk31c6a00d734d5c3c2cef1983b97aeef28d7e62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 16:52:10.200416    8351 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19529-2268/.minikube/proxy-client-ca.key
	I0828 16:52:10.377737    8351 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19529-2268/.minikube/proxy-client-ca.crt ...
	I0828 16:52:10.377766    8351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-2268/.minikube/proxy-client-ca.crt: {Name:mkb7d2fe42c83c663df3c323544682df706cfa10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 16:52:10.377946    8351 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19529-2268/.minikube/proxy-client-ca.key ...
	I0828 16:52:10.377959    8351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-2268/.minikube/proxy-client-ca.key: {Name:mkde1bfa5876cc86e88933dc4f11e26338aec186 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 16:52:10.378044    8351 certs.go:256] generating profile certs ...
	I0828 16:52:10.378107    8351 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/client.key
	I0828 16:52:10.378126    8351 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/client.crt with IP's: []
	I0828 16:52:10.668514    8351 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/client.crt ...
	I0828 16:52:10.668546    8351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/client.crt: {Name:mk0bf75a68352223126840db807ae3de1785496f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 16:52:10.668761    8351 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/client.key ...
	I0828 16:52:10.668776    8351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/client.key: {Name:mke5437844e3e8336640394824ad2200149c1ecc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 16:52:10.668901    8351 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/apiserver.key.8de113c2
	I0828 16:52:10.668924    8351 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/apiserver.crt.8de113c2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0828 16:52:11.051012    8351 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/apiserver.crt.8de113c2 ...
	I0828 16:52:11.051047    8351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/apiserver.crt.8de113c2: {Name:mkc6a19c67206bdf37dd83ab3e556e81ed6bab1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 16:52:11.051240    8351 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/apiserver.key.8de113c2 ...
	I0828 16:52:11.051257    8351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/apiserver.key.8de113c2: {Name:mkc5d85cd28a5e2d26534e1c655c148fbbccba54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 16:52:11.051360    8351 certs.go:381] copying /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/apiserver.crt.8de113c2 -> /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/apiserver.crt
	I0828 16:52:11.051448    8351 certs.go:385] copying /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/apiserver.key.8de113c2 -> /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/apiserver.key
	I0828 16:52:11.051503    8351 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/proxy-client.key
	I0828 16:52:11.051527    8351 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/proxy-client.crt with IP's: []
	I0828 16:52:11.226673    8351 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/proxy-client.crt ...
	I0828 16:52:11.226703    8351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/proxy-client.crt: {Name:mk568c76b92e501f69dd6bbe51c69bbf287935ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 16:52:11.226869    8351 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/proxy-client.key ...
	I0828 16:52:11.226890    8351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/proxy-client.key: {Name:mk5718a55b649ac4323a7c85cd30a6e29a7704f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 16:52:11.227065    8351 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-2268/.minikube/certs/ca-key.pem (1679 bytes)
	I0828 16:52:11.227107    8351 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-2268/.minikube/certs/ca.pem (1078 bytes)
	I0828 16:52:11.227134    8351 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-2268/.minikube/certs/cert.pem (1123 bytes)
	I0828 16:52:11.227164    8351 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-2268/.minikube/certs/key.pem (1675 bytes)
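
The certs.go steps above first generate the shared minikubeCA key pair, then sign the profile, apiserver, and proxy-client certificates against it. A self-contained sketch of the CA-generation half using only the standard library (minikube's real implementation differs in detail):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().AddDate(10, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
    		BasicConstraintsValid: true,
    	}
    	// Self-signed: the template doubles as the parent certificate.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    	pem.Encode(os.Stdout, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
    }
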
	I0828 16:52:11.227793    8351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-2268/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0828 16:52:11.254064    8351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-2268/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0828 16:52:11.279495    8351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-2268/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0828 16:52:11.304326    8351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-2268/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0828 16:52:11.330904    8351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0828 16:52:11.360132    8351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0828 16:52:11.391428    8351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0828 16:52:11.418601    8351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0828 16:52:11.447434    8351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-2268/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0828 16:52:11.472808    8351 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0828 16:52:11.491782    8351 ssh_runner.go:195] Run: openssl version
	I0828 16:52:11.497467    8351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0828 16:52:11.507661    8351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0828 16:52:11.511289    8351 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 28 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0828 16:52:11.511428    8351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0828 16:52:11.518696    8351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
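
The b5213941.0 name above is the OpenSSL subject hash of the CA certificate, which is why the openssl x509 -hash -noout step precedes the symlink: TLS clients that scan /etc/ssl/certs look certificates up by that hash. A sketch reproducing the two steps (installCACert is hypothetical):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // installCACert computes the OpenSSL subject hash of the CA PEM and
    // symlinks it under /etc/ssl/certs as <hash>.0, as the log shows.
    func installCACert(pemPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
    	os.Remove(link) // ln -fs semantics: replace any stale link
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
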
	I0828 16:52:11.528325    8351 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0828 16:52:11.532016    8351 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0828 16:52:11.532060    8351 kubeadm.go:392] StartCluster: {Name:addons-161312 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-161312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 16:52:11.532193    8351 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0828 16:52:11.550087    8351 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0828 16:52:11.559062    8351 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0828 16:52:11.568438    8351 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0828 16:52:11.568503    8351 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0828 16:52:11.578976    8351 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0828 16:52:11.578994    8351 kubeadm.go:157] found existing configuration files:
	
	I0828 16:52:11.579045    8351 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0828 16:52:11.587614    8351 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0828 16:52:11.587676    8351 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0828 16:52:11.595871    8351 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0828 16:52:11.605211    8351 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0828 16:52:11.605284    8351 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0828 16:52:11.613630    8351 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0828 16:52:11.622499    8351 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0828 16:52:11.622591    8351 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0828 16:52:11.631247    8351 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0828 16:52:11.639958    8351 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0828 16:52:11.640046    8351 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0828 16:52:11.648441    8351 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0828 16:52:11.688977    8351 kubeadm.go:310] W0828 16:52:11.688316    1805 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0828 16:52:11.691165    8351 kubeadm.go:310] W0828 16:52:11.690587    1805 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0828 16:52:11.714959    8351 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-aws\n", err: exit status 1
	I0828 16:52:11.774326    8351 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0828 16:52:30.136239    8351 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0828 16:52:30.136351    8351 kubeadm.go:310] [preflight] Running pre-flight checks
	I0828 16:52:30.136469    8351 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0828 16:52:30.136543    8351 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-aws
	I0828 16:52:30.136605    8351 kubeadm.go:310] OS: Linux
	I0828 16:52:30.136672    8351 kubeadm.go:310] CGROUPS_CPU: enabled
	I0828 16:52:30.136746    8351 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0828 16:52:30.136815    8351 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0828 16:52:30.136889    8351 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0828 16:52:30.136957    8351 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0828 16:52:30.137032    8351 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0828 16:52:30.137095    8351 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0828 16:52:30.137168    8351 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0828 16:52:30.137237    8351 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0828 16:52:30.137332    8351 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0828 16:52:30.137450    8351 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0828 16:52:30.137567    8351 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0828 16:52:30.137646    8351 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0828 16:52:30.141948    8351 out.go:235]   - Generating certificates and keys ...
	I0828 16:52:30.142128    8351 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0828 16:52:30.142232    8351 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0828 16:52:30.142335    8351 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0828 16:52:30.142445    8351 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0828 16:52:30.142528    8351 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0828 16:52:30.142591    8351 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0828 16:52:30.142652    8351 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0828 16:52:30.142778    8351 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-161312 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0828 16:52:30.142842    8351 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0828 16:52:30.142961    8351 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-161312 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0828 16:52:30.143036    8351 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0828 16:52:30.143147    8351 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0828 16:52:30.143221    8351 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0828 16:52:30.143277    8351 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0828 16:52:30.143562    8351 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0828 16:52:30.143632    8351 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0828 16:52:30.143703    8351 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0828 16:52:30.143833    8351 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0828 16:52:30.143898    8351 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0828 16:52:30.144000    8351 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0828 16:52:30.144097    8351 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0828 16:52:30.148080    8351 out.go:235]   - Booting up control plane ...
	I0828 16:52:30.148201    8351 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0828 16:52:30.148294    8351 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0828 16:52:30.148366    8351 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0828 16:52:30.148531    8351 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0828 16:52:30.148631    8351 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0828 16:52:30.148674    8351 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0828 16:52:30.148808    8351 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0828 16:52:30.148917    8351 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0828 16:52:30.148981    8351 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 2.501767605s
	I0828 16:52:30.149056    8351 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0828 16:52:30.149117    8351 kubeadm.go:310] [api-check] The API server is healthy after 7.001934983s
	I0828 16:52:30.149223    8351 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0828 16:52:30.149346    8351 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0828 16:52:30.149406    8351 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0828 16:52:30.149582    8351 kubeadm.go:310] [mark-control-plane] Marking the node addons-161312 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0828 16:52:30.149641    8351 kubeadm.go:310] [bootstrap-token] Using token: ny7wfw.cf9xojta6jouq4ye
	I0828 16:52:30.153185    8351 out.go:235]   - Configuring RBAC rules ...
	I0828 16:52:30.153340    8351 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0828 16:52:30.153426    8351 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0828 16:52:30.153564    8351 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0828 16:52:30.153699    8351 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0828 16:52:30.153817    8351 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0828 16:52:30.153903    8351 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0828 16:52:30.154019    8351 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0828 16:52:30.154064    8351 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0828 16:52:30.154111    8351 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0828 16:52:30.154119    8351 kubeadm.go:310] 
	I0828 16:52:30.154176    8351 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0828 16:52:30.154186    8351 kubeadm.go:310] 
	I0828 16:52:30.154278    8351 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0828 16:52:30.154284    8351 kubeadm.go:310] 
	I0828 16:52:30.154310    8351 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0828 16:52:30.154371    8351 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0828 16:52:30.154423    8351 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0828 16:52:30.154438    8351 kubeadm.go:310] 
	I0828 16:52:30.154493    8351 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0828 16:52:30.154506    8351 kubeadm.go:310] 
	I0828 16:52:30.154553    8351 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0828 16:52:30.154560    8351 kubeadm.go:310] 
	I0828 16:52:30.154612    8351 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0828 16:52:30.154688    8351 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0828 16:52:30.154758    8351 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0828 16:52:30.154763    8351 kubeadm.go:310] 
	I0828 16:52:30.154844    8351 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0828 16:52:30.154918    8351 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0828 16:52:30.154924    8351 kubeadm.go:310] 
	I0828 16:52:30.155005    8351 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ny7wfw.cf9xojta6jouq4ye \
	I0828 16:52:30.155110    8351 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:32297ca7f0abb6eea50ed3c14eaeba642f0933631e0d91616c2b0d22f9e1a84c \
	I0828 16:52:30.155137    8351 kubeadm.go:310] 	--control-plane 
	I0828 16:52:30.155141    8351 kubeadm.go:310] 
	I0828 16:52:30.155223    8351 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0828 16:52:30.155230    8351 kubeadm.go:310] 
	I0828 16:52:30.155367    8351 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ny7wfw.cf9xojta6jouq4ye \
	I0828 16:52:30.155520    8351 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:32297ca7f0abb6eea50ed3c14eaeba642f0933631e0d91616c2b0d22f9e1a84c 
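	The bootstrap token in the join commands above expires after kubeadm's default TTL (24h). A fresh join line can be minted on the control plane later; a minimal sketch using stock kubeadm (not something this test run executes):

		# Prints a ready-to-run "kubeadm join ... --token ... --discovery-token-ca-cert-hash ..." line
		kubeadm token create --print-join-command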
	I0828 16:52:30.155548    8351 cni.go:84] Creating CNI manager for ""
	I0828 16:52:30.155563    8351 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0828 16:52:30.159046    8351 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0828 16:52:30.161089    8351 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0828 16:52:30.173400    8351 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
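	The 496-byte conflist that minikube copies here is not echoed in the log. For orientation only, a bridge CNI configuration of this general shape could be written by hand (the standard bridge/portmap/host-local plugins are real, but the subnet and exact contents below are illustrative assumptions, not the actual file):

		sudo tee /etc/cni/net.d/1-k8s.conflist <<'EOF'
		{
		  "cniVersion": "0.3.1",
		  "name": "bridge",
		  "plugins": [
		    { "type": "bridge", "bridge": "bridge", "isGateway": true, "ipMasq": true,
		      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
		    { "type": "portmap", "capabilities": { "portMappings": true } }
		  ]
		}
		EOF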
	I0828 16:52:30.207426    8351 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0828 16:52:30.207634    8351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-161312 minikube.k8s.io/updated_at=2024_08_28T16_52_30_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6f256f0bf490fd67de29a75a245d072e85b1b216 minikube.k8s.io/name=addons-161312 minikube.k8s.io/primary=true
	I0828 16:52:30.207693    8351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 16:52:30.217266    8351 ops.go:34] apiserver oom_adj: -16
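	The oom_adj probe above confirms the API server runs with a strongly negative OOM score adjustment, so the kernel's OOM killer deprioritizes it under memory pressure. The same check by hand (illustrative; oom_adj is the legacy interface, and modern kernels also expose oom_score_adj):

		cat /proc/"$(pgrep kube-apiserver)"/oom_adj        # -16 in this run
		cat /proc/"$(pgrep kube-apiserver)"/oom_score_adj  # modern equivalent knob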
	I0828 16:52:30.328954    8351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 16:52:30.829886    8351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 16:52:31.329058    8351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 16:52:31.829802    8351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 16:52:32.329588    8351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 16:52:32.829039    8351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 16:52:33.329140    8351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 16:52:33.829639    8351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 16:52:34.329700    8351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 16:52:34.464408    8351 kubeadm.go:1113] duration metric: took 4.256866605s to wait for elevateKubeSystemPrivileges
	I0828 16:52:34.464440    8351 kubeadm.go:394] duration metric: took 22.932382063s to StartCluster
	I0828 16:52:34.464457    8351 settings.go:142] acquiring lock: {Name:mke1e724d192d07afd5e039ebae8b3217691ebf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 16:52:34.464570    8351 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19529-2268/kubeconfig
	I0828 16:52:34.464984    8351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-2268/kubeconfig: {Name:mk783f27e67c290c3cb897056b28951084501c36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 16:52:34.465181    8351 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0828 16:52:34.465290    8351 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0828 16:52:34.465586    8351 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
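	The toEnable map above is the resolved on/off state for every addon known to this profile. The same toggles are exposed through the minikube CLI; for example (standard minikube commands, not part of this test's command stream):

		out/minikube-linux-arm64 -p addons-161312 addons list
		out/minikube-linux-arm64 -p addons-161312 addons enable registry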
	I0828 16:52:34.465681    8351 config.go:182] Loaded profile config "addons-161312": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 16:52:34.465721    8351 addons.go:69] Setting default-storageclass=true in profile "addons-161312"
	I0828 16:52:34.465774    8351 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-161312"
	I0828 16:52:34.465707    8351 addons.go:69] Setting yakd=true in profile "addons-161312"
	I0828 16:52:34.465894    8351 addons.go:234] Setting addon yakd=true in "addons-161312"
	I0828 16:52:34.465938    8351 host.go:66] Checking if "addons-161312" exists ...
	I0828 16:52:34.466122    8351 cli_runner.go:164] Run: docker container inspect addons-161312 --format={{.State.Status}}
	I0828 16:52:34.466565    8351 cli_runner.go:164] Run: docker container inspect addons-161312 --format={{.State.Status}}
	I0828 16:52:34.466949    8351 addons.go:69] Setting gcp-auth=true in profile "addons-161312"
	I0828 16:52:34.466989    8351 mustload.go:65] Loading cluster: addons-161312
	I0828 16:52:34.467162    8351 config.go:182] Loaded profile config "addons-161312": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 16:52:34.467432    8351 cli_runner.go:164] Run: docker container inspect addons-161312 --format={{.State.Status}}
	I0828 16:52:34.469623    8351 addons.go:69] Setting ingress=true in profile "addons-161312"
	I0828 16:52:34.469663    8351 addons.go:234] Setting addon ingress=true in "addons-161312"
	I0828 16:52:34.469703    8351 host.go:66] Checking if "addons-161312" exists ...
	I0828 16:52:34.470299    8351 cli_runner.go:164] Run: docker container inspect addons-161312 --format={{.State.Status}}
	I0828 16:52:34.465714    8351 addons.go:69] Setting cloud-spanner=true in profile "addons-161312"
	I0828 16:52:34.471650    8351 addons.go:234] Setting addon cloud-spanner=true in "addons-161312"
	I0828 16:52:34.471695    8351 host.go:66] Checking if "addons-161312" exists ...
	I0828 16:52:34.465718    8351 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-161312"
	I0828 16:52:34.471880    8351 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-161312"
	I0828 16:52:34.471904    8351 host.go:66] Checking if "addons-161312" exists ...
	I0828 16:52:34.471989    8351 addons.go:69] Setting ingress-dns=true in profile "addons-161312"
	I0828 16:52:34.472011    8351 addons.go:234] Setting addon ingress-dns=true in "addons-161312"
	I0828 16:52:34.472038    8351 host.go:66] Checking if "addons-161312" exists ...
	I0828 16:52:34.472152    8351 addons.go:69] Setting inspektor-gadget=true in profile "addons-161312"
	I0828 16:52:34.472168    8351 addons.go:234] Setting addon inspektor-gadget=true in "addons-161312"
	I0828 16:52:34.472183    8351 host.go:66] Checking if "addons-161312" exists ...
	I0828 16:52:34.472324    8351 cli_runner.go:164] Run: docker container inspect addons-161312 --format={{.State.Status}}
	I0828 16:52:34.472597    8351 cli_runner.go:164] Run: docker container inspect addons-161312 --format={{.State.Status}}
	I0828 16:52:34.473983    8351 out.go:177] * Verifying Kubernetes components...
	I0828 16:52:34.477315    8351 addons.go:69] Setting metrics-server=true in profile "addons-161312"
	I0828 16:52:34.477355    8351 addons.go:234] Setting addon metrics-server=true in "addons-161312"
	I0828 16:52:34.477392    8351 host.go:66] Checking if "addons-161312" exists ...
	I0828 16:52:34.477849    8351 cli_runner.go:164] Run: docker container inspect addons-161312 --format={{.State.Status}}
	I0828 16:52:34.488638    8351 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-161312"
	I0828 16:52:34.488683    8351 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-161312"
	I0828 16:52:34.488719    8351 host.go:66] Checking if "addons-161312" exists ...
	I0828 16:52:34.489162    8351 cli_runner.go:164] Run: docker container inspect addons-161312 --format={{.State.Status}}
	I0828 16:52:34.491929    8351 cli_runner.go:164] Run: docker container inspect addons-161312 --format={{.State.Status}}
	I0828 16:52:34.515397    8351 addons.go:69] Setting registry=true in profile "addons-161312"
	I0828 16:52:34.515443    8351 addons.go:234] Setting addon registry=true in "addons-161312"
	I0828 16:52:34.515492    8351 host.go:66] Checking if "addons-161312" exists ...
	I0828 16:52:34.515950    8351 cli_runner.go:164] Run: docker container inspect addons-161312 --format={{.State.Status}}
	I0828 16:52:34.519833    8351 cli_runner.go:164] Run: docker container inspect addons-161312 --format={{.State.Status}}
	I0828 16:52:34.532833    8351 addons.go:69] Setting storage-provisioner=true in profile "addons-161312"
	I0828 16:52:34.533439    8351 addons.go:234] Setting addon storage-provisioner=true in "addons-161312"
	I0828 16:52:34.533504    8351 host.go:66] Checking if "addons-161312" exists ...
	I0828 16:52:34.534412    8351 cli_runner.go:164] Run: docker container inspect addons-161312 --format={{.State.Status}}
	I0828 16:52:34.538382    8351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 16:52:34.565998    8351 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-161312"
	I0828 16:52:34.566102    8351 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-161312"
	I0828 16:52:34.566555    8351 cli_runner.go:164] Run: docker container inspect addons-161312 --format={{.State.Status}}
	I0828 16:52:34.615272    8351 addons.go:69] Setting volcano=true in profile "addons-161312"
	I0828 16:52:34.623799    8351 addons.go:234] Setting addon volcano=true in "addons-161312"
	I0828 16:52:34.623850    8351 host.go:66] Checking if "addons-161312" exists ...
	I0828 16:52:34.624304    8351 cli_runner.go:164] Run: docker container inspect addons-161312 --format={{.State.Status}}
	I0828 16:52:34.628970    8351 addons.go:234] Setting addon default-storageclass=true in "addons-161312"
	I0828 16:52:34.629012    8351 host.go:66] Checking if "addons-161312" exists ...
	I0828 16:52:34.629430    8351 cli_runner.go:164] Run: docker container inspect addons-161312 --format={{.State.Status}}
	I0828 16:52:34.638121    8351 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0828 16:52:34.639802    8351 addons.go:69] Setting volumesnapshots=true in profile "addons-161312"
	I0828 16:52:34.639879    8351 addons.go:234] Setting addon volumesnapshots=true in "addons-161312"
	I0828 16:52:34.639922    8351 host.go:66] Checking if "addons-161312" exists ...
	I0828 16:52:34.640394    8351 cli_runner.go:164] Run: docker container inspect addons-161312 --format={{.State.Status}}
	I0828 16:52:34.642142    8351 host.go:66] Checking if "addons-161312" exists ...
	I0828 16:52:34.642299    8351 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0828 16:52:34.650980    8351 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0828 16:52:34.653737    8351 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0828 16:52:34.653813    8351 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0828 16:52:34.654136    8351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-161312
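	The Go-template inspect calls here and below recover the host port Docker published for the container's SSH port (22/tcp), which sshutil then dials on 127.0.0.1. An equivalent one-off lookup (illustrative):

		docker container inspect -f '{{ (index (index .NetworkSettings.Ports "22/tcp") 0).HostPort }}' addons-161312
		docker port addons-161312 22/tcp   # porcelain form of the same query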
	I0828 16:52:34.655340    8351 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0828 16:52:34.655913    8351 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0828 16:52:34.657580    8351 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0828 16:52:34.657645    8351 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0828 16:52:34.657743    8351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-161312
	I0828 16:52:34.659563    8351 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0828 16:52:34.659584    8351 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0828 16:52:34.659646    8351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-161312
	I0828 16:52:34.687950    8351 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0828 16:52:34.688024    8351 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0828 16:52:34.688130    8351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-161312
	I0828 16:52:34.717594    8351 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0828 16:52:34.720290    8351 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0828 16:52:34.723404    8351 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0828 16:52:34.723511    8351 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0828 16:52:34.723614    8351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-161312
	I0828 16:52:34.748191    8351 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0828 16:52:34.750271    8351 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0828 16:52:34.752270    8351 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0828 16:52:34.754441    8351 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0828 16:52:34.756170    8351 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 16:52:34.756469    8351 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0828 16:52:34.756504    8351 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0828 16:52:34.756597    8351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-161312
	I0828 16:52:34.758886    8351 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0828 16:52:34.760148    8351 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 16:52:34.760196    8351 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0828 16:52:34.760290    8351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-161312
	I0828 16:52:34.778196    8351 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0828 16:52:34.780213    8351 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0828 16:52:34.782206    8351 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0828 16:52:34.787515    8351 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0828 16:52:34.789691    8351 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0828 16:52:34.789713    8351 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0828 16:52:34.789787    8351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-161312
	I0828 16:52:34.812082    8351 out.go:177]   - Using image docker.io/registry:2.8.3
	I0828 16:52:34.813787    8351 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0828 16:52:34.815479    8351 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0828 16:52:34.815499    8351 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0828 16:52:34.815569    8351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-161312
	I0828 16:52:34.838580    8351 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0828 16:52:34.840420    8351 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0828 16:52:34.840443    8351 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0828 16:52:34.840506    8351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-161312
	I0828 16:52:34.897866    8351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19529-2268/.minikube/machines/addons-161312/id_rsa Username:docker}
	I0828 16:52:34.901992    8351 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-161312"
	I0828 16:52:34.902035    8351 host.go:66] Checking if "addons-161312" exists ...
	I0828 16:52:34.902440    8351 cli_runner.go:164] Run: docker container inspect addons-161312 --format={{.State.Status}}
	I0828 16:52:34.903431    8351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19529-2268/.minikube/machines/addons-161312/id_rsa Username:docker}
	I0828 16:52:34.916958    8351 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0828 16:52:34.916978    8351 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0828 16:52:34.917038    8351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-161312
	I0828 16:52:34.937046    8351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19529-2268/.minikube/machines/addons-161312/id_rsa Username:docker}
	I0828 16:52:34.940273    8351 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0828 16:52:34.940457    8351 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0828 16:52:34.942197    8351 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0828 16:52:34.942220    8351 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0828 16:52:34.942291    8351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-161312
	I0828 16:52:34.945186    8351 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0828 16:52:34.947132    8351 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0828 16:52:34.950245    8351 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0828 16:52:34.950344    8351 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0828 16:52:34.950450    8351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-161312
	I0828 16:52:34.977586    8351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19529-2268/.minikube/machines/addons-161312/id_rsa Username:docker}
	I0828 16:52:34.995966    8351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19529-2268/.minikube/machines/addons-161312/id_rsa Username:docker}
	I0828 16:52:35.023763    8351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19529-2268/.minikube/machines/addons-161312/id_rsa Username:docker}
	I0828 16:52:35.024306    8351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19529-2268/.minikube/machines/addons-161312/id_rsa Username:docker}
	I0828 16:52:35.040156    8351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19529-2268/.minikube/machines/addons-161312/id_rsa Username:docker}
	I0828 16:52:35.068409    8351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19529-2268/.minikube/machines/addons-161312/id_rsa Username:docker}
	I0828 16:52:35.080502    8351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19529-2268/.minikube/machines/addons-161312/id_rsa Username:docker}
	I0828 16:52:35.107006    8351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19529-2268/.minikube/machines/addons-161312/id_rsa Username:docker}
	I0828 16:52:35.135547    8351 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0828 16:52:35.137864    8351 out.go:177]   - Using image docker.io/busybox:stable
	I0828 16:52:35.144659    8351 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0828 16:52:35.144684    8351 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0828 16:52:35.144752    8351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-161312
	I0828 16:52:35.145808    8351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19529-2268/.minikube/machines/addons-161312/id_rsa Username:docker}
	W0828 16:52:35.152985    8351 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0828 16:52:35.153075    8351 retry.go:31] will retry after 329.757604ms: ssh: handshake failed: EOF
	I0828 16:52:35.154393    8351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19529-2268/.minikube/machines/addons-161312/id_rsa Username:docker}
	I0828 16:52:35.186974    8351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19529-2268/.minikube/machines/addons-161312/id_rsa Username:docker}
	I0828 16:52:35.495099    8351 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0828 16:52:35.495161    8351 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0828 16:52:35.519837    8351 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0828 16:52:35.519871    8351 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0828 16:52:35.544456    8351 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0828 16:52:35.544512    8351 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0828 16:52:35.576115    8351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0828 16:52:35.686686    8351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0828 16:52:35.710157    8351 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0828 16:52:35.710199    8351 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0828 16:52:35.819354    8351 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0828 16:52:35.819442    8351 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0828 16:52:35.895140    8351 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.356716908s)
	I0828 16:52:35.895230    8351 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 16:52:35.895351    8351 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.430043624s)
	I0828 16:52:35.895532    8351 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
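	The sed pipeline above edits the live coredns ConfigMap in place: it inserts a hosts block ahead of the forward directive so host.minikube.internal resolves to the gateway IP, and enables query logging ahead of errors. Reconstructed from the sed expressions above, the resulting Corefile fragment looks like:

		        log
		        errors
		        hosts {
		           192.168.49.1 host.minikube.internal
		           fallthrough
		        }
		        forward . /etc/resolv.conf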
	I0828 16:52:35.939652    8351 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0828 16:52:35.939726    8351 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0828 16:52:35.948285    8351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0828 16:52:35.992454    8351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0828 16:52:36.088628    8351 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0828 16:52:36.088707    8351 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0828 16:52:36.091274    8351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 16:52:36.251102    8351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0828 16:52:36.318604    8351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0828 16:52:36.348416    8351 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0828 16:52:36.348489    8351 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0828 16:52:36.356880    8351 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0828 16:52:36.356955    8351 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0828 16:52:36.421646    8351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0828 16:52:36.437657    8351 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0828 16:52:36.437730    8351 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0828 16:52:36.446567    8351 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0828 16:52:36.446642    8351 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0828 16:52:36.462475    8351 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0828 16:52:36.462542    8351 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0828 16:52:36.707683    8351 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0828 16:52:36.707758    8351 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0828 16:52:36.747249    8351 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0828 16:52:36.747389    8351 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0828 16:52:36.757960    8351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0828 16:52:36.824553    8351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0828 16:52:36.835859    8351 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0828 16:52:36.835933    8351 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0828 16:52:36.882365    8351 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0828 16:52:36.882442    8351 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0828 16:52:36.938439    8351 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0828 16:52:36.938509    8351 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0828 16:52:37.024149    8351 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0828 16:52:37.024233    8351 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0828 16:52:37.108518    8351 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0828 16:52:37.108595    8351 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0828 16:52:37.288907    8351 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0828 16:52:37.288987    8351 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0828 16:52:37.388830    8351 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0828 16:52:37.388906    8351 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0828 16:52:37.437032    8351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0828 16:52:37.580294    8351 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0828 16:52:37.580313    8351 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0828 16:52:37.745751    8351 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0828 16:52:37.745776    8351 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0828 16:52:37.812262    8351 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0828 16:52:37.812334    8351 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0828 16:52:37.907344    8351 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0828 16:52:37.907416    8351 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0828 16:52:37.914594    8351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0828 16:52:38.073893    8351 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0828 16:52:38.073969    8351 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0828 16:52:38.182383    8351 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0828 16:52:38.182456    8351 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0828 16:52:38.335523    8351 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0828 16:52:38.335595    8351 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0828 16:52:38.634328    8351 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0828 16:52:38.634389    8351 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0828 16:52:38.709942    8351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0828 16:52:38.985489    8351 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0828 16:52:38.985564    8351 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0828 16:52:39.377666    8351 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0828 16:52:39.377740    8351 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0828 16:52:39.976638    8351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0828 16:52:40.577577    8351 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.00142585s)
	I0828 16:52:40.577608    8351 addons.go:475] Verifying addon metrics-server=true in "addons-161312"
	I0828 16:52:40.577647    8351 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.890938259s)
	I0828 16:52:40.577696    8351 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (4.682146261s)
	I0828 16:52:40.577706    8351 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0828 16:52:40.578809    8351 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.683549884s)
	I0828 16:52:40.579560    8351 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.631170691s)
	I0828 16:52:40.580110    8351 node_ready.go:35] waiting up to 6m0s for node "addons-161312" to be "Ready" ...
	I0828 16:52:40.623553    8351 node_ready.go:49] node "addons-161312" has status "Ready":"True"
	I0828 16:52:40.623579    8351 node_ready.go:38] duration metric: took 43.420322ms for node "addons-161312" to be "Ready" ...
	I0828 16:52:40.623590    8351 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
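	These readiness gates mirror checks one could run directly against the cluster; hedged kubectl equivalents (illustrative, reusing the test's context and node name):

		kubectl --context addons-161312 get node addons-161312 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
		kubectl --context addons-161312 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m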
	I0828 16:52:40.661413    8351 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-4w259" in "kube-system" namespace to be "Ready" ...
	I0828 16:52:41.081603    8351 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-161312" context rescaled to 1 replicas
	I0828 16:52:41.171955    8351 pod_ready.go:93] pod "coredns-6f6b679f8f-4w259" in "kube-system" namespace has status "Ready":"True"
	I0828 16:52:41.172028    8351 pod_ready.go:82] duration metric: took 510.537652ms for pod "coredns-6f6b679f8f-4w259" in "kube-system" namespace to be "Ready" ...
	I0828 16:52:41.172056    8351 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-hcl4z" in "kube-system" namespace to be "Ready" ...
	I0828 16:52:41.178235    8351 pod_ready.go:93] pod "coredns-6f6b679f8f-hcl4z" in "kube-system" namespace has status "Ready":"True"
	I0828 16:52:41.178310    8351 pod_ready.go:82] duration metric: took 6.233972ms for pod "coredns-6f6b679f8f-hcl4z" in "kube-system" namespace to be "Ready" ...
	I0828 16:52:41.178336    8351 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-161312" in "kube-system" namespace to be "Ready" ...
	I0828 16:52:41.184621    8351 pod_ready.go:93] pod "etcd-addons-161312" in "kube-system" namespace has status "Ready":"True"
	I0828 16:52:41.184703    8351 pod_ready.go:82] duration metric: took 6.336958ms for pod "etcd-addons-161312" in "kube-system" namespace to be "Ready" ...
	I0828 16:52:41.184729    8351 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-161312" in "kube-system" namespace to be "Ready" ...
	I0828 16:52:41.190444    8351 pod_ready.go:93] pod "kube-apiserver-addons-161312" in "kube-system" namespace has status "Ready":"True"
	I0828 16:52:41.190514    8351 pod_ready.go:82] duration metric: took 5.748782ms for pod "kube-apiserver-addons-161312" in "kube-system" namespace to be "Ready" ...
	I0828 16:52:41.190539    8351 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-161312" in "kube-system" namespace to be "Ready" ...
	I0828 16:52:41.384216    8351 pod_ready.go:93] pod "kube-controller-manager-addons-161312" in "kube-system" namespace has status "Ready":"True"
	I0828 16:52:41.384288    8351 pod_ready.go:82] duration metric: took 193.726836ms for pod "kube-controller-manager-addons-161312" in "kube-system" namespace to be "Ready" ...
	I0828 16:52:41.384404    8351 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-j6f7q" in "kube-system" namespace to be "Ready" ...
	I0828 16:52:41.652341    8351 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0828 16:52:41.652511    8351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-161312
	I0828 16:52:41.679421    8351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19529-2268/.minikube/machines/addons-161312/id_rsa Username:docker}
	I0828 16:52:41.783559    8351 pod_ready.go:93] pod "kube-proxy-j6f7q" in "kube-system" namespace has status "Ready":"True"
	I0828 16:52:41.783581    8351 pod_ready.go:82] duration metric: took 399.151269ms for pod "kube-proxy-j6f7q" in "kube-system" namespace to be "Ready" ...
	I0828 16:52:41.783591    8351 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-161312" in "kube-system" namespace to be "Ready" ...
	I0828 16:52:42.198054    8351 pod_ready.go:93] pod "kube-scheduler-addons-161312" in "kube-system" namespace has status "Ready":"True"
	I0828 16:52:42.198084    8351 pod_ready.go:82] duration metric: took 414.485212ms for pod "kube-scheduler-addons-161312" in "kube-system" namespace to be "Ready" ...
	I0828 16:52:42.198098    8351 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-2gwmk" in "kube-system" namespace to be "Ready" ...
	I0828 16:52:42.334809    8351 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0828 16:52:42.747512    8351 addons.go:234] Setting addon gcp-auth=true in "addons-161312"
	I0828 16:52:42.747558    8351 host.go:66] Checking if "addons-161312" exists ...
	I0828 16:52:42.748020    8351 cli_runner.go:164] Run: docker container inspect addons-161312 --format={{.State.Status}}
	I0828 16:52:42.768607    8351 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0828 16:52:42.768672    8351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-161312
	I0828 16:52:42.797548    8351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19529-2268/.minikube/machines/addons-161312/id_rsa Username:docker}
	I0828 16:52:44.206140    8351 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2gwmk" in "kube-system" namespace has status "Ready":"False"
	I0828 16:52:44.913665    8351 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.921126115s)
	I0828 16:52:44.913701    8351 addons.go:475] Verifying addon ingress=true in "addons-161312"
	I0828 16:52:44.913873    8351 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.82243821s)
	I0828 16:52:44.913923    8351 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.662733647s)
	I0828 16:52:44.914017    8351 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.595346076s)
	I0828 16:52:44.914065    8351 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.492351748s)
	I0828 16:52:44.916869    8351 out.go:177] * Verifying ingress addon...
	I0828 16:52:44.920310    8351 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0828 16:52:44.929913    8351 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0828 16:52:44.929942    8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:52:45.426134    8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:52:45.926162    8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:52:46.208011    8351 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2gwmk" in "kube-system" namespace has status "Ready":"False"
	I0828 16:52:46.425610    8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:52:46.938507    8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:52:47.447729    8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:52:47.946048    8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:52:48.074075    8351 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (11.316018607s)
	I0828 16:52:48.074150    8351 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (11.249535308s)
	I0828 16:52:48.074172    8351 addons.go:475] Verifying addon registry=true in "addons-161312"
	I0828 16:52:48.074485    8351 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (10.637282724s)
	I0828 16:52:48.074726    8351 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (10.160055057s)
	W0828 16:52:48.074760    8351 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0828 16:52:48.074806    8351 retry.go:31] will retry after 247.600913ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
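The failure above is the usual CRD-establishment race: the VolumeSnapshotClass object is applied in the same kubectl invocation that creates its CRD, and the REST mapping for snapshot.storage.k8s.io/v1 is not yet discoverable, so retry.go backs off and re-applies (the retried invocation at 16:52:48.322 below also adds --force). A minimal Go sketch of that retry pattern follows; it is not minikube's actual implementation, and it assumes kubectl is on PATH and uses a hypothetical one-file manifest list:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// applyWithRetry re-runs `kubectl apply` while the error indicates the
	// CRD's REST mapping is not yet established, backing off between tries.
	func applyWithRetry(files []string, attempts int) error {
		args := []string{"apply"}
		for _, f := range files {
			args = append(args, "-f", f)
		}
		backoff := 250 * time.Millisecond
		var out []byte
		var err error
		for i := 0; i < attempts; i++ {
			out, err = exec.Command("kubectl", args...).CombinedOutput()
			if err == nil {
				return nil
			}
			// "no matches for kind" is what the API server reports until the
			// freshly created CRD is established and discovery catches up.
			if !strings.Contains(string(out), "no matches for kind") {
				return fmt.Errorf("apply failed: %v\n%s", err, out)
			}
			time.Sleep(backoff)
			backoff *= 2
		}
		return fmt.Errorf("apply still failing after %d attempts: %v\n%s", attempts, err, out)
	}

	func main() {
		// Hypothetical manifest list mirroring the addon files in the log.
		files := []string{"csi-hostpath-snapshotclass.yaml"}
		if err := applyWithRetry(files, 5); err != nil {
			fmt.Println(err)
		}
	}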
	I0828 16:52:48.074893    8351 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (9.364863952s)
	I0828 16:52:48.076437    8351 out.go:177] * Verifying registry addon...
	I0828 16:52:48.076590    8351 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-161312 service yakd-dashboard -n yakd-dashboard
	
	I0828 16:52:48.079088    8351 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0828 16:52:48.180586    8351 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0828 16:52:48.180614    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
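What kapi.go is doing in the lines above and below: list pods by label selector, then poll until every match reports Running. A minimal client-go sketch of that wait loop (an assumed helper, not minikube's implementation; it requires k8s.io/client-go, and the kubeconfig path, namespace, selector, and ~500ms cadence are taken from the log):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForLabel polls pods matching selector in ns until all are Running.
	func waitForLabel(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return err
			}
			running := 0
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					running++
				}
			}
			if len(pods.Items) > 0 && running == len(pods.Items) {
				return nil
			}
			time.Sleep(500 * time.Millisecond) // matches the ~2 checks/s cadence in the log
		}
		return fmt.Errorf("timed out waiting for %q in %q", selector, ns)
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // path from the log
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitForLabel(cs, "kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute); err != nil {
			fmt.Println(err)
		}
	}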
	I0828 16:52:48.297495    8351 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2gwmk" in "kube-system" namespace has status "Ready":"False"
	I0828 16:52:48.322746    8351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0828 16:52:48.457264    8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:52:48.551330    8351 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (8.574634954s)
	I0828 16:52:48.551368    8351 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-161312"
	I0828 16:52:48.551419    8351 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.782791574s)
	I0828 16:52:48.554344    8351 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0828 16:52:48.554497    8351 out.go:177] * Verifying csi-hostpath-driver addon...
	I0828 16:52:48.556249    8351 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0828 16:52:48.558559    8351 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0828 16:52:48.558624    8351 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0828 16:52:48.559672    8351 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0828 16:52:48.597456    8351 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0828 16:52:48.597483    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:52:48.664960    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:52:48.739789    8351 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0828 16:52:48.739816    8351 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0828 16:52:48.855066    8351 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0828 16:52:48.855092    8351 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0828 16:52:48.914918    8351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	[16:52:48-16:52:50: kapi.go:96 readiness checks repeated ~2x/s; pods for "app.kubernetes.io/name=ingress-nginx", "kubernetes.io/minikube-addons=csi-hostpath-driver", and "kubernetes.io/minikube-addons=registry" all still Pending]
	I0828 16:52:50.514007    8351 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.191212437s)
	I0828 16:52:50.566300    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:52:50.591517    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:52:50.717917    8351 pod_ready.go:103] pod "metrics-server-84c5f94fbc-2gwmk" in "kube-system" namespace has status "Ready":"False"
	I0828 16:52:50.757658    8351 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.842687037s)
	I0828 16:52:50.760846    8351 addons.go:475] Verifying addon gcp-auth=true in "addons-161312"
	I0828 16:52:50.763663    8351 out.go:177] * Verifying gcp-auth addon...
	I0828 16:52:50.766976    8351 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0828 16:52:50.770995    8351 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	[16:52:50-16:53:14: kapi.go:96 readiness checks for "app.kubernetes.io/name=ingress-nginx", "kubernetes.io/minikube-addons=csi-hostpath-driver", and "kubernetes.io/minikube-addons=registry" repeated ~2x/s, all still Pending; pod_ready:103 reported pod "metrics-server-84c5f94fbc-2gwmk" in "kube-system" namespace with status "Ready":"False" at each ~2s check]
	I0828 16:53:14.083199    8351 kapi.go:107] duration metric: took 26.004110235s to wait for kubernetes.io/minikube-addons=registry ...
	[16:53:14-16:53:17: kapi.go:96 checks for "app.kubernetes.io/name=ingress-nginx" and "kubernetes.io/minikube-addons=csi-hostpath-driver" still Pending; "metrics-server-84c5f94fbc-2gwmk" still "Ready":"False" at 16:53:15]
	I0828 16:53:17.205923    8351 pod_ready.go:93] pod "metrics-server-84c5f94fbc-2gwmk" in "kube-system" namespace has status "Ready":"True"
	I0828 16:53:17.205996    8351 pod_ready.go:82] duration metric: took 35.007888776s for pod "metrics-server-84c5f94fbc-2gwmk" in "kube-system" namespace to be "Ready" ...
	I0828 16:53:17.206021    8351 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-lbb78" in "kube-system" namespace to be "Ready" ...
	I0828 16:53:17.213015    8351 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-lbb78" in "kube-system" namespace has status "Ready":"True"
	I0828 16:53:17.213087    8351 pod_ready.go:82] duration metric: took 7.042981ms for pod "nvidia-device-plugin-daemonset-lbb78" in "kube-system" namespace to be "Ready" ...
	I0828 16:53:17.213117    8351 pod_ready.go:39] duration metric: took 36.589515501s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 16:53:17.213164    8351 api_server.go:52] waiting for apiserver process to appear ...
	I0828 16:53:17.213257    8351 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 16:53:17.233327    8351 api_server.go:72] duration metric: took 42.768099334s to wait for apiserver process to appear ...
	I0828 16:53:17.233388    8351 api_server.go:88] waiting for apiserver healthz status ...
	I0828 16:53:17.233432    8351 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0828 16:53:17.242507    8351 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0828 16:53:17.243684    8351 api_server.go:141] control plane version: v1.31.0
	I0828 16:53:17.243706    8351 api_server.go:131] duration metric: took 10.288445ms to wait for apiserver health ...
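The healthz probe above is a plain HTTPS GET against the apiserver. A minimal Go sketch for reference (the address comes from the log; skipping TLS verification and relying on anonymous access to /healthz are simplifying assumptions for the sketch, not how minikube authenticates):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		// The apiserver's cert is signed by the cluster CA; skipping
		// verification keeps the sketch self-contained (in real code,
		// load the CA bundle from the kubeconfig instead).
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
	}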
	I0828 16:53:17.243714    8351 system_pods.go:43] waiting for kube-system pods to appear ...
	I0828 16:53:17.254976    8351 system_pods.go:59] 17 kube-system pods found
	I0828 16:53:17.255070    8351 system_pods.go:61] "coredns-6f6b679f8f-hcl4z" [9a756596-b7bf-46f4-980d-8062d8e5aa1f] Running
	I0828 16:53:17.255097    8351 system_pods.go:61] "csi-hostpath-attacher-0" [c4679fdb-0197-47d9-b556-c74ff2f7b4d2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0828 16:53:17.255133    8351 system_pods.go:61] "csi-hostpath-resizer-0" [e762894a-c229-4849-94fb-b1068d4897a9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0828 16:53:17.255158    8351 system_pods.go:61] "csi-hostpathplugin-772lg" [5b927797-2d55-4d8e-982a-f8f23f5dd1e8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0828 16:53:17.255178    8351 system_pods.go:61] "etcd-addons-161312" [e35c016f-6a9a-4ca9-8ef8-138c9453a446] Running
	I0828 16:53:17.255198    8351 system_pods.go:61] "kube-apiserver-addons-161312" [533f4348-327f-4995-8b03-e3b792d2cb4e] Running
	I0828 16:53:17.255217    8351 system_pods.go:61] "kube-controller-manager-addons-161312" [0ac59ac9-39b6-474d-b926-ba33667a7ad3] Running
	I0828 16:53:17.255251    8351 system_pods.go:61] "kube-ingress-dns-minikube" [82ef5f44-8529-4660-9df8-d1fd1e34055c] Running
	I0828 16:53:17.255270    8351 system_pods.go:61] "kube-proxy-j6f7q" [df5e438a-974c-4830-943c-d4b8a0c301cb] Running
	I0828 16:53:17.255288    8351 system_pods.go:61] "kube-scheduler-addons-161312" [2fb79932-e90a-44fc-831a-7f9b52a380bc] Running
	I0828 16:53:17.255328    8351 system_pods.go:61] "metrics-server-84c5f94fbc-2gwmk" [dd1f5b27-27c7-4ddf-973e-855eb2bbbe37] Running
	I0828 16:53:17.255350    8351 system_pods.go:61] "nvidia-device-plugin-daemonset-lbb78" [4b16be02-3cce-4ec1-9435-fabfc1c55ab7] Running
	I0828 16:53:17.255369    8351 system_pods.go:61] "registry-6fb4cdfc84-2d9gq" [c7dd58ff-e9b5-4511-9a22-023705b9fdfe] Running
	I0828 16:53:17.255387    8351 system_pods.go:61] "registry-proxy-8svd4" [c0749a82-4329-4dc6-92f9-0bd490e250bc] Running
	I0828 16:53:17.255409    8351 system_pods.go:61] "snapshot-controller-56fcc65765-h2qqv" [1a3268d8-8a1e-4024-a144-00c9e97e7db0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0828 16:53:17.255439    8351 system_pods.go:61] "snapshot-controller-56fcc65765-qvk9j" [572c18e2-432c-42ea-bec9-9ef3707837c6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0828 16:53:17.255464    8351 system_pods.go:61] "storage-provisioner" [496c4e27-2d97-4e7f-acac-7e8dcd1adbc7] Running
	I0828 16:53:17.255493    8351 system_pods.go:74] duration metric: took 11.772389ms to wait for pod list to return data ...
	I0828 16:53:17.255516    8351 default_sa.go:34] waiting for default service account to be created ...
	I0828 16:53:17.258777    8351 default_sa.go:45] found service account: "default"
	I0828 16:53:17.258835    8351 default_sa.go:55] duration metric: took 3.292314ms for default service account to be created ...
	I0828 16:53:17.258866    8351 system_pods.go:116] waiting for k8s-apps to be running ...
	I0828 16:53:17.268524    8351 system_pods.go:86] 17 kube-system pods found
	I0828 16:53:17.268561    8351 system_pods.go:89] "coredns-6f6b679f8f-hcl4z" [9a756596-b7bf-46f4-980d-8062d8e5aa1f] Running
	I0828 16:53:17.268571    8351 system_pods.go:89] "csi-hostpath-attacher-0" [c4679fdb-0197-47d9-b556-c74ff2f7b4d2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0828 16:53:17.268579    8351 system_pods.go:89] "csi-hostpath-resizer-0" [e762894a-c229-4849-94fb-b1068d4897a9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0828 16:53:17.268586    8351 system_pods.go:89] "csi-hostpathplugin-772lg" [5b927797-2d55-4d8e-982a-f8f23f5dd1e8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0828 16:53:17.268592    8351 system_pods.go:89] "etcd-addons-161312" [e35c016f-6a9a-4ca9-8ef8-138c9453a446] Running
	I0828 16:53:17.268596    8351 system_pods.go:89] "kube-apiserver-addons-161312" [533f4348-327f-4995-8b03-e3b792d2cb4e] Running
	I0828 16:53:17.268605    8351 system_pods.go:89] "kube-controller-manager-addons-161312" [0ac59ac9-39b6-474d-b926-ba33667a7ad3] Running
	I0828 16:53:17.268610    8351 system_pods.go:89] "kube-ingress-dns-minikube" [82ef5f44-8529-4660-9df8-d1fd1e34055c] Running
	I0828 16:53:17.268620    8351 system_pods.go:89] "kube-proxy-j6f7q" [df5e438a-974c-4830-943c-d4b8a0c301cb] Running
	I0828 16:53:17.268627    8351 system_pods.go:89] "kube-scheduler-addons-161312" [2fb79932-e90a-44fc-831a-7f9b52a380bc] Running
	I0828 16:53:17.268631    8351 system_pods.go:89] "metrics-server-84c5f94fbc-2gwmk" [dd1f5b27-27c7-4ddf-973e-855eb2bbbe37] Running
	I0828 16:53:17.268635    8351 system_pods.go:89] "nvidia-device-plugin-daemonset-lbb78" [4b16be02-3cce-4ec1-9435-fabfc1c55ab7] Running
	I0828 16:53:17.268645    8351 system_pods.go:89] "registry-6fb4cdfc84-2d9gq" [c7dd58ff-e9b5-4511-9a22-023705b9fdfe] Running
	I0828 16:53:17.268648    8351 system_pods.go:89] "registry-proxy-8svd4" [c0749a82-4329-4dc6-92f9-0bd490e250bc] Running
	I0828 16:53:17.268655    8351 system_pods.go:89] "snapshot-controller-56fcc65765-h2qqv" [1a3268d8-8a1e-4024-a144-00c9e97e7db0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0828 16:53:17.268667    8351 system_pods.go:89] "snapshot-controller-56fcc65765-qvk9j" [572c18e2-432c-42ea-bec9-9ef3707837c6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0828 16:53:17.268674    8351 system_pods.go:89] "storage-provisioner" [496c4e27-2d97-4e7f-acac-7e8dcd1adbc7] Running
	I0828 16:53:17.268681    8351 system_pods.go:126] duration metric: took 9.796624ms to wait for k8s-apps to be running ...
	I0828 16:53:17.268692    8351 system_svc.go:44] waiting for kubelet service to be running ....
	I0828 16:53:17.268747    8351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 16:53:17.284186    8351 system_svc.go:56] duration metric: took 15.485363ms WaitForService to wait for kubelet
	I0828 16:53:17.284213    8351 kubeadm.go:582] duration metric: took 42.81899845s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 16:53:17.284237    8351 node_conditions.go:102] verifying NodePressure condition ...
	I0828 16:53:17.287606    8351 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0828 16:53:17.287638    8351 node_conditions.go:123] node cpu capacity is 2
	I0828 16:53:17.287651    8351 node_conditions.go:105] duration metric: took 3.408559ms to run NodePressure ...
	I0828 16:53:17.287664    8351 start.go:241] waiting for startup goroutines ...
	[16:53:17-16:53:45: kapi.go:96 checks for "app.kubernetes.io/name=ingress-nginx" and "kubernetes.io/minikube-addons=csi-hostpath-driver" repeated ~2x/s, both still Pending]
	I0828 16:53:46.065689    8351 kapi.go:107] duration metric: took 57.506016712s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0828 16:53:46.424499    8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:46.924736    8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:47.424809    8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:47.925324    8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:48.424426    8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:48.925122    8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:49.424827    8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:49.925249    8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:50.425214    8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:50.925124    8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:51.424553    8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:51.925908    8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:52.425991    8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:52.924617    8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:53.424636    8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:53.925414    8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:54.424340    8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:54.925465    8351 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:55.432894    8351 kapi.go:107] duration metric: took 1m10.512581055s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0828 16:54:14.271015    8351 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0828 16:54:14.271040    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:14.770934    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:15.270657    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:15.770619    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:16.271073    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:16.770213    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:17.270109    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:17.771627    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:18.271228    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:18.771266    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:19.270891    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:19.771065    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:20.271254    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:20.770988    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:21.271082    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:21.771337    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:22.271744    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:22.771128    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:23.270985    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:23.770705    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:24.270839    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:24.771030    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:25.270484    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:25.769927    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:26.270624    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:26.770680    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:27.275993    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:27.770055    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:28.271535    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:28.771763    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:29.270245    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:29.771328    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:30.272637    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:30.771018    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:31.271037    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:31.770738    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:32.272701    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:32.770464    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:33.271096    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:33.770694    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:34.270555    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:34.770887    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:35.270858    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:35.770171    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:36.269922    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:36.770380    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:37.271058    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:37.770051    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:38.271444    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:38.770395    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:39.271141    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:39.770574    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:40.271497    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:40.769970    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:41.270269    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:41.771241    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:42.271710    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:42.771175    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:43.276584    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:43.770552    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:44.270789    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:44.770673    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:45.271118    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:45.771804    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:46.272275    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:46.770291    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:47.270337    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:47.770978    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:48.271625    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:48.770602    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:49.270867    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:49.771217    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:50.271394    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:50.771365    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:51.271500    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:51.770299    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:52.271233    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:52.770662    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:53.270165    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:53.786109    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:54.271323    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:54.770652    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:55.270670    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:55.770342    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:56.270861    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:56.770518    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:57.271354    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:57.771810    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:58.271086    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:58.770943    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:59.270438    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:59.770554    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:55:00.304075    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:55:00.770285    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:55:01.276314    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:55:01.770820    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:55:02.270041    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:55:02.771373    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:55:03.271826    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:55:03.770363    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:55:04.271220    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:55:04.771412    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:55:05.270004    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:55:05.770773    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:55:06.270482    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:55:06.771067    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:55:07.270757    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:55:07.770988    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:55:08.269935    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:55:08.770910    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:55:09.270847    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:55:09.770675    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:55:10.270674    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:55:10.771374    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:55:11.271396    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:55:11.769817    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:55:12.270935    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:55:12.770837    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:55:13.270856    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:55:13.770848    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:55:14.271588    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:55:14.771189    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:55:15.271088    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:55:15.770881    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:55:16.271587    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:55:16.770739    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:55:17.270361    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:55:17.770909    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:55:18.270751    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:55:18.771635    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:55:19.270900    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:55:19.771436    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:55:20.272220    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:55:20.771423    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:55:21.270782    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:55:21.772013    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:55:22.270889    8351 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:55:22.770673    8351 kapi.go:107] duration metric: took 2m32.003695392s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0828 16:55:22.772724    8351 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-161312 cluster.
	I0828 16:55:22.774548    8351 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0828 16:55:22.776166    8351 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0828 16:55:22.777784    8351 out.go:177] * Enabled addons: metrics-server, cloud-spanner, default-storageclass, storage-provisioner, ingress-dns, nvidia-device-plugin, storage-provisioner-rancher, volcano, inspektor-gadget, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
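For reference, the opt-out described in the gcp-auth guidance above is a plain pod label. A minimal sketch, assuming the webhook skips any pod carrying the `gcp-auth-skip-secret` label key named in the log (the pod name, image, and label value here are illustrative):

# Create a pod that the gcp-auth webhook should leave alone.
# Only the label key comes from the log above; the rest is illustrative.
kubectl --context addons-161312 apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: no-gcp-creds                  # illustrative name
  labels:
    gcp-auth-skip-secret: "true"      # key named in the log; tells the webhook not to mount credentials
spec:
  containers:
  - name: main                        # illustrative container
    image: busybox:stable
    command: ["sleep", "3600"]
EOF

Per the same output, pods created before the addon finished enabling only pick up credentials after being recreated, or after rerunning the enable step with the --refresh flag (e.g. `minikube addons enable gcp-auth --refresh`).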
	I0828 16:55:22.779689    8351 addons.go:510] duration metric: took 2m48.314097913s for enable addons: enabled=[metrics-server cloud-spanner default-storageclass storage-provisioner ingress-dns nvidia-device-plugin storage-provisioner-rancher volcano inspektor-gadget yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0828 16:55:22.779748    8351 start.go:246] waiting for cluster config update ...
	I0828 16:55:22.779770    8351 start.go:255] writing updated cluster config ...
	I0828 16:55:22.780072    8351 ssh_runner.go:195] Run: rm -f paused
	I0828 16:55:23.111628    8351 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0828 16:55:23.113812    8351 out.go:177] * Done! kubectl is now configured to use "addons-161312" cluster and "default" namespace by default
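Since the log reports that kubectl now targets the new cluster, the configured context can be sanity-checked; a sketch, assuming minikube names the context after the profile (addons-161312):

# Confirm the active context and namespace that minikube configured.
kubectl config current-context        # expected: addons-161312
kubectl config view --minify -o jsonpath='{.contexts[0].context.namespace}'   # expected: default (empty also means default)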
	
	
	==> Docker <==
	Aug 28 17:05:01 addons-161312 dockerd[1280]: time="2024-08-28T17:05:01.593510585Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Aug 28 17:05:01 addons-161312 dockerd[1280]: time="2024-08-28T17:05:01.596549935Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Aug 28 17:05:04 addons-161312 dockerd[1280]: time="2024-08-28T17:05:04.508620296Z" level=info msg="ignoring event" container=f4d0621886e0ae872e0b13263f09c5ab4baec0f18294fa8e59590a8653f671a6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 28 17:05:04 addons-161312 dockerd[1280]: time="2024-08-28T17:05:04.539506643Z" level=info msg="ignoring event" container=7b3d4772f5476d3a16eca71b0da5fd97041d8dc90a4b4d96e4577ea2d13cd583 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 28 17:05:04 addons-161312 dockerd[1280]: time="2024-08-28T17:05:04.688365767Z" level=info msg="ignoring event" container=e3b2e60f39e61be2535dff6db43b9699f213a698ba34a736b9606ef36427950a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 28 17:05:04 addons-161312 dockerd[1280]: time="2024-08-28T17:05:04.705271570Z" level=info msg="ignoring event" container=6cb12e876898070ee96bba67ec6163bde92256fa667356782999fb767fa74894 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 28 17:05:11 addons-161312 dockerd[1280]: time="2024-08-28T17:05:11.126444143Z" level=info msg="ignoring event" container=548e9d0494fb0064b7c5c24121148f52647c9d00dc9631767d4812a71fcf5566 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 28 17:05:11 addons-161312 dockerd[1280]: time="2024-08-28T17:05:11.298454385Z" level=info msg="ignoring event" container=7ed2932e3a60211057436e76f31d8faa9fabb4e813b9109bff5db275d79017e3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 28 17:05:12 addons-161312 cri-dockerd[1536]: time="2024-08-28T17:05:12Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a01e9f709c2810f785523176ef27c88f4235f7cc39ae11b5d0021bacdcf85d84/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Aug 28 17:05:12 addons-161312 dockerd[1280]: time="2024-08-28T17:05:12.214736315Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Aug 28 17:05:12 addons-161312 cri-dockerd[1536]: time="2024-08-28T17:05:12Z" level=info msg="Stop pulling image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: Status: Downloaded newer image for busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Aug 28 17:05:12 addons-161312 dockerd[1280]: time="2024-08-28T17:05:12.961092036Z" level=info msg="ignoring event" container=29de7d5e4fc3d4d7e854a18bbd1002acdd55e45be797bf81c7bff49340088097 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 28 17:05:14 addons-161312 dockerd[1280]: time="2024-08-28T17:05:14.210746428Z" level=info msg="ignoring event" container=a01e9f709c2810f785523176ef27c88f4235f7cc39ae11b5d0021bacdcf85d84 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 28 17:05:15 addons-161312 cri-dockerd[1536]: time="2024-08-28T17:05:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/913bdb5f3117e54c5d04f1834c7cbd3ede2da7d18ccc6da80401aeed4f60f233/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Aug 28 17:05:16 addons-161312 cri-dockerd[1536]: time="2024-08-28T17:05:16Z" level=info msg="Stop pulling image busybox:stable: Status: Downloaded newer image for busybox:stable"
	Aug 28 17:05:16 addons-161312 dockerd[1280]: time="2024-08-28T17:05:16.889671118Z" level=info msg="ignoring event" container=7718a307feb581b03178ef8fdabe2a9c50a97b8e11cff86d684a347108a265a7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 28 17:05:18 addons-161312 dockerd[1280]: time="2024-08-28T17:05:18.289756455Z" level=info msg="ignoring event" container=be486ce9070525a402077008744dc9ca35a7f3c70e26e3833a0ba1507ba134b9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 28 17:05:18 addons-161312 dockerd[1280]: time="2024-08-28T17:05:18.413992376Z" level=info msg="ignoring event" container=913bdb5f3117e54c5d04f1834c7cbd3ede2da7d18ccc6da80401aeed4f60f233 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 28 17:05:19 addons-161312 dockerd[1280]: time="2024-08-28T17:05:19.027249394Z" level=info msg="ignoring event" container=99036035e792b95f9cd5d7a982905f48606ae1eee0c1814210d9dc9a31f994db module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 28 17:05:19 addons-161312 dockerd[1280]: time="2024-08-28T17:05:19.142688360Z" level=info msg="ignoring event" container=a69dc079b82a7807dcec21d632cabad0231019793363657acd1d80e02c11f849 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 28 17:05:19 addons-161312 dockerd[1280]: time="2024-08-28T17:05:19.383470730Z" level=info msg="ignoring event" container=56543a0b8482e6f3cc3321b83bbe45bbf71e971da9f7df0854751f45b48bae2a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 28 17:05:19 addons-161312 cri-dockerd[1536]: time="2024-08-28T17:05:19Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"registry-proxy-8svd4_kube-system\": unexpected command output nsenter: cannot open /proc/3644/ns/net: No such file or directory\n with error: exit status 1"
	Aug 28 17:05:19 addons-161312 dockerd[1280]: time="2024-08-28T17:05:19.691463939Z" level=info msg="ignoring event" container=8b27242d5b4eee6de4916df367071864e5b75848fe8b35437be8aae2ef824fb4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 28 17:05:20 addons-161312 cri-dockerd[1536]: time="2024-08-28T17:05:20Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a093c9115a3588705897d0d0e40d4d99d3e86bf9c8b5bae061d9507b42b321dc/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Aug 28 17:05:20 addons-161312 dockerd[1280]: time="2024-08-28T17:05:20.522512690Z" level=info msg="ignoring event" container=8432ee7d1c481b56a1686a6e4190a9fd133782b412d9ec1d822220f75324796c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD
	8432ee7d1c481       fc9db2894f4e4                                                                                                                Less than a second ago   Exited              helper-pod                0                   a093c9115a358       helper-pod-delete-pvc-b8f607a9-8b55-4e8d-8fe4-271ae4fca8c7
	29de7d5e4fc3d       busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79                                              8 seconds ago            Exited              helper-pod                0                   a01e9f709c281       helper-pod-create-pvc-b8f607a9-8b55-4e8d-8fe4-271ae4fca8c7
	0c8e8e8da0b8a       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc            49 seconds ago           Exited              gadget                    7                   142c22ab6df73       gadget-ml8j2
	1554916b100cf       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 9 minutes ago            Running             gcp-auth                  0                   2a2572df2af14       gcp-auth-89d5ffd79-cmxxh
	cd0483b058900       registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce             11 minutes ago           Running             controller                0                   f863098cd56a0       ingress-nginx-controller-bc57996ff-xfz6v
	00b7abf90d0c9       420193b27261a                                                                                                                11 minutes ago           Exited              patch                     1                   5fdb0ece2aa34       ingress-nginx-admission-patch-vlb58
	adc70c6243974       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago           Exited              create                    0                   495a97876574a       ingress-nginx-admission-create-klgcd
	68940392a1c43       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                       12 minutes ago           Running             local-path-provisioner    0                   81e0a79b34fc6       local-path-provisioner-86d989889c-dsnx5
	4d66551822250       registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9        12 minutes ago           Running             metrics-server            0                   3d6b786a107f1       metrics-server-84c5f94fbc-2gwmk
	5fbb650a6d9a1       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc               12 minutes ago           Running             cloud-spanner-emulator    0                   b389c2e73075a       cloud-spanner-emulator-769b77f747-8spwt
	044e1b7fedf76       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c             12 minutes ago           Running             minikube-ingress-dns      0                   5882e8babd681       kube-ingress-dns-minikube
	43100bccad9bb       ba04bb24b9575                                                                                                                12 minutes ago           Running             storage-provisioner       0                   92bfd727a018c       storage-provisioner
	e4d8a13c0eca6       2437cf7621777                                                                                                                12 minutes ago           Running             coredns                   0                   a3a3ff086dd15       coredns-6f6b679f8f-hcl4z
	9818f836df069       71d55d66fd4ee                                                                                                                12 minutes ago           Running             kube-proxy                0                   027916b115c06       kube-proxy-j6f7q
	4e27c6ac85dc0       fcb0683e6bdbd                                                                                                                12 minutes ago           Running             kube-controller-manager   0                   d1c04a6a9ffe3       kube-controller-manager-addons-161312
	ae1dc3c789881       fbbbd428abb4d                                                                                                                12 minutes ago           Running             kube-scheduler            0                   c7757778c6d63       kube-scheduler-addons-161312
	71776afd39911       27e3830e14027                                                                                                                12 minutes ago           Running             etcd                      0                   3c35e2fa9c0ad       etcd-addons-161312
	a19a095ebea3e       cd0f0ae0ec9e0                                                                                                                12 minutes ago           Running             kube-apiserver            0                   5bda477f20731       kube-apiserver-addons-161312
	
	
	==> controller_ingress [cd0483b05890] <==
	W0828 16:53:55.163724       7 client_config.go:659] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
	I0828 16:53:55.164110       7 main.go:205] "Creating API client" host="https://10.96.0.1:443"
	I0828 16:53:55.179502       7 main.go:248] "Running in Kubernetes cluster" major="1" minor="31" git="v1.31.0" state="clean" commit="9edcffcde5595e8a5b1a35f88c421764e575afce" platform="linux/arm64"
	I0828 16:53:55.733876       7 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0828 16:53:55.763286       7 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0828 16:53:55.777080       7 nginx.go:271] "Starting NGINX Ingress controller"
	I0828 16:53:55.795784       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"f38accfa-91e9-43ae-b242-cbccb64c4b02", APIVersion:"v1", ResourceVersion:"665", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0828 16:53:55.802167       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"aeec0a62-3ec6-4ae6-8077-709c338fac49", APIVersion:"v1", ResourceVersion:"666", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0828 16:53:55.802638       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"95fadb9b-3723-4373-8e34-fd0f2383e603", APIVersion:"v1", ResourceVersion:"667", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0828 16:53:56.978480       7 nginx.go:317] "Starting NGINX process"
	I0828 16:53:56.978675       7 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0828 16:53:56.979193       7 nginx.go:337] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0828 16:53:56.979417       7 controller.go:193] "Configuration changes detected, backend reload required"
	I0828 16:53:56.996986       7 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0828 16:53:56.997126       7 status.go:85] "New leader elected" identity="ingress-nginx-controller-bc57996ff-xfz6v"
	I0828 16:53:57.007039       7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-bc57996ff-xfz6v" node="addons-161312"
	I0828 16:53:57.034221       7 controller.go:213] "Backend successfully reloaded"
	I0828 16:53:57.034513       7 controller.go:224] "Initial sync, sleeping for 1 second"
	I0828 16:53:57.034634       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-xfz6v", UID:"fbdde173-be80-4530-8596-b91cbae0540e", APIVersion:"v1", ResourceVersion:"1230", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	  Build:         46e76e5916813cfca2a9b0bfdc34b69a0000f6b9
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.25.5
	
	-------------------------------------------------------------------------------
	
	
	
	==> coredns [e4d8a13c0eca] <==
	[INFO] 10.244.0.7:37172 - 51637 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000096981s
	[INFO] 10.244.0.7:40454 - 60681 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002620176s
	[INFO] 10.244.0.7:40454 - 32523 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002598778s
	[INFO] 10.244.0.7:48561 - 4416 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00016453s
	[INFO] 10.244.0.7:48561 - 44614 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000095446s
	[INFO] 10.244.0.7:33359 - 29396 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000085313s
	[INFO] 10.244.0.7:33359 - 40667 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000047456s
	[INFO] 10.244.0.7:57900 - 24274 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000043846s
	[INFO] 10.244.0.7:57900 - 52175 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00003437s
	[INFO] 10.244.0.7:34971 - 40716 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00004502s
	[INFO] 10.244.0.7:34971 - 3634 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000036289s
	[INFO] 10.244.0.7:46595 - 65471 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001840335s
	[INFO] 10.244.0.7:46595 - 63933 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001700821s
	[INFO] 10.244.0.7:51720 - 17491 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000055998s
	[INFO] 10.244.0.7:51720 - 63313 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000062364s
	[INFO] 10.244.0.25:32917 - 37251 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000383306s
	[INFO] 10.244.0.25:41148 - 22610 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000323707s
	[INFO] 10.244.0.25:53401 - 30384 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000152069s
	[INFO] 10.244.0.25:37595 - 1848 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000141304s
	[INFO] 10.244.0.25:47820 - 14472 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000127339s
	[INFO] 10.244.0.25:33809 - 57194 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000108796s
	[INFO] 10.244.0.25:58074 - 19171 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.0027328s
	[INFO] 10.244.0.25:36644 - 3267 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002977116s
	[INFO] 10.244.0.25:37481 - 4397 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002412286s
	[INFO] 10.244.0.25:43196 - 43493 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.002487721s
	
	
	==> describe nodes <==
	Name:               addons-161312
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-161312
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f256f0bf490fd67de29a75a245d072e85b1b216
	                    minikube.k8s.io/name=addons-161312
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_28T16_52_30_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-161312
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 28 Aug 2024 16:52:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-161312
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 28 Aug 2024 17:05:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 28 Aug 2024 17:01:10 +0000   Wed, 28 Aug 2024 16:52:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 28 Aug 2024 17:01:10 +0000   Wed, 28 Aug 2024 16:52:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 28 Aug 2024 17:01:10 +0000   Wed, 28 Aug 2024 16:52:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 28 Aug 2024 17:01:10 +0000   Wed, 28 Aug 2024 16:52:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-161312
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 192e84d6552845128e8e3999ee1f3130
	  System UUID:                65678d4a-b43f-4c7c-940d-443e3c36e38e
	  Boot ID:                    4e364349-6d08-4a99-bc76-4bf6d585326a
	  Kernel Version:             5.15.0-1068-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (16 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m17s
	  default                     cloud-spanner-emulator-769b77f747-8spwt                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  gadget                      gadget-ml8j2                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  gcp-auth                    gcp-auth-89d5ffd79-cmxxh                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-xfz6v                      100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         12m
	  kube-system                 coredns-6f6b679f8f-hcl4z                                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-addons-161312                                            100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-161312                                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-161312                         200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-j6f7q                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-161312                                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 metrics-server-84c5f94fbc-2gwmk                               100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         12m
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          helper-pod-delete-pvc-b8f607a9-8b55-4e8d-8fe4-271ae4fca8c7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  local-path-storage          local-path-provisioner-86d989889c-dsnx5                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  0 (0%)
	  memory             460Mi (5%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 12m   kube-proxy       
	  Normal   Starting                 12m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  12m   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m   kubelet          Node addons-161312 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m   kubelet          Node addons-161312 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m   kubelet          Node addons-161312 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m   node-controller  Node addons-161312 event: Registered Node addons-161312 in Controller
	
	
	==> dmesg <==
	[Aug28 16:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.016366] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.491983] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.065955] systemd[1]: /lib/systemd/system/cloud-init-local.service:15: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.002669] systemd[1]: /lib/systemd/system/cloud-init.service:19: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.018580] systemd[1]: /lib/systemd/system/cloud-init.target:15: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.005263] systemd[1]: /lib/systemd/system/cloud-final.service:9: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.003912] systemd[1]: /lib/systemd/system/cloud-config.service:8: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.767668] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.821413] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> etcd [71776afd3991] <==
	{"level":"info","ts":"2024-08-28T16:52:22.578046Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-08-28T16:52:22.578060Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-08-28T16:52:22.647690Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-28T16:52:22.647732Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-28T16:52:22.647755Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-08-28T16:52:22.647775Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-08-28T16:52:22.647781Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-08-28T16:52:22.647791Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-08-28T16:52:22.647799Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-08-28T16:52:22.651515Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-161312 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-28T16:52:22.651670Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-28T16:52:22.651956Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-28T16:52:22.652042Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-28T16:52:22.652153Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-28T16:52:22.652181Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-28T16:52:22.652791Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-28T16:52:22.653673Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-08-28T16:52:22.654271Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-28T16:52:22.655055Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-28T16:52:22.655122Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-28T16:52:22.655192Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-28T16:52:22.655211Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-28T17:02:24.061850Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1842}
	{"level":"info","ts":"2024-08-28T17:02:24.107504Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1842,"took":"45.087181ms","hash":3577647896,"current-db-size-bytes":9084928,"current-db-size":"9.1 MB","current-db-size-in-use-bytes":4907008,"current-db-size-in-use":"4.9 MB"}
	{"level":"info","ts":"2024-08-28T17:02:24.107554Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3577647896,"revision":1842,"compact-revision":-1}
	
	
	==> gcp-auth [1554916b100c] <==
	2024/08/28 16:55:21 GCP Auth Webhook started!
	2024/08/28 16:55:39 Ready to marshal response ...
	2024/08/28 16:55:39 Ready to write response ...
	2024/08/28 16:55:40 Ready to marshal response ...
	2024/08/28 16:55:40 Ready to write response ...
	2024/08/28 16:56:04 Ready to marshal response ...
	2024/08/28 16:56:04 Ready to write response ...
	2024/08/28 16:56:04 Ready to marshal response ...
	2024/08/28 16:56:04 Ready to write response ...
	2024/08/28 16:56:04 Ready to marshal response ...
	2024/08/28 16:56:04 Ready to write response ...
	2024/08/28 17:04:18 Ready to marshal response ...
	2024/08/28 17:04:18 Ready to write response ...
	2024/08/28 17:04:30 Ready to marshal response ...
	2024/08/28 17:04:30 Ready to write response ...
	2024/08/28 17:04:48 Ready to marshal response ...
	2024/08/28 17:04:48 Ready to write response ...
	2024/08/28 17:05:11 Ready to marshal response ...
	2024/08/28 17:05:11 Ready to write response ...
	2024/08/28 17:05:11 Ready to marshal response ...
	2024/08/28 17:05:11 Ready to write response ...
	2024/08/28 17:05:19 Ready to marshal response ...
	2024/08/28 17:05:19 Ready to write response ...
	
	
	==> kernel <==
	 17:05:21 up 47 min,  0 users,  load average: 1.30, 1.17, 0.96
	Linux addons-161312 5.15.0-1068-aws #74~20.04.1-Ubuntu SMP Tue Aug 6 19:45:17 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kube-apiserver [a19a095ebea3] <==
	I0828 16:55:55.093054       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0828 16:55:55.405417       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0828 16:55:55.450727       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0828 16:55:55.594003       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0828 16:55:55.740262       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	W0828 16:55:56.071480       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0828 16:55:56.094489       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0828 16:55:56.137551       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0828 16:55:56.210788       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0828 16:55:56.655661       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0828 16:55:56.799629       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0828 17:04:38.211417       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0828 17:05:04.281089       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0828 17:05:04.281996       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0828 17:05:04.311523       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0828 17:05:04.311746       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0828 17:05:04.334194       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0828 17:05:04.334252       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0828 17:05:04.344245       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0828 17:05:04.344483       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0828 17:05:04.389252       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0828 17:05:04.389735       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0828 17:05:05.343828       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0828 17:05:05.389586       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0828 17:05:05.402394       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [4e27c6ac85dc] <==
	E0828 17:05:06.599540       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0828 17:05:06.942912       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0828 17:05:06.942953       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0828 17:05:08.106022       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0828 17:05:08.106072       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0828 17:05:08.729795       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0828 17:05:08.729842       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0828 17:05:09.204500       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0828 17:05:09.204547       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0828 17:05:11.729194       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0828 17:05:11.729238       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0828 17:05:12.248490       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0828 17:05:12.248593       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0828 17:05:12.550038       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0828 17:05:12.550086       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0828 17:05:12.643435       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0828 17:05:12.643484       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0828 17:05:14.120937       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0828 17:05:14.120985       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0828 17:05:14.891740       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0828 17:05:14.892057       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0828 17:05:18.917220       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-6fb4cdfc84" duration="4.513µs"
	W0828 17:05:20.183660       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0828 17:05:20.183704       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0828 17:05:20.585217       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-86d989889c" duration="5.169µs"
	
	
	==> kube-proxy [9818f836df06] <==
	I0828 16:52:35.667755       1 server_linux.go:66] "Using iptables proxy"
	I0828 16:52:35.796830       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0828 16:52:35.797031       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0828 16:52:35.832021       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0828 16:52:35.832076       1 server_linux.go:169] "Using iptables Proxier"
	I0828 16:52:35.836909       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0828 16:52:35.840586       1 server.go:483] "Version info" version="v1.31.0"
	I0828 16:52:35.840627       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0828 16:52:35.862133       1 config.go:197] "Starting service config controller"
	I0828 16:52:35.862196       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0828 16:52:35.862265       1 config.go:104] "Starting endpoint slice config controller"
	I0828 16:52:35.862271       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0828 16:52:35.864610       1 config.go:326] "Starting node config controller"
	I0828 16:52:35.864625       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0828 16:52:35.963283       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0828 16:52:35.963560       1 shared_informer.go:320] Caches are synced for service config
	I0828 16:52:35.964948       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [ae1dc3c78988] <==
	I0828 16:52:26.548771       1 serving.go:386] Generated self-signed cert in-memory
	W0828 16:52:28.088874       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0828 16:52:28.088911       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0828 16:52:28.088922       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0828 16:52:28.088930       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0828 16:52:28.111220       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0828 16:52:28.111491       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0828 16:52:28.114120       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0828 16:52:28.114374       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0828 16:52:28.114942       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0828 16:52:28.115104       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	W0828 16:52:28.117491       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0828 16:52:28.117732       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0828 16:52:29.314812       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 28 17:05:18 addons-161312 kubelet[2324]: I0828 17:05:18.600473    2324 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/641ea3b4-9444-43cc-88b0-461a677bd1a7-pvc-b8f607a9-8b55-4e8d-8fe4-271ae4fca8c7" (OuterVolumeSpecName: "data") pod "641ea3b4-9444-43cc-88b0-461a677bd1a7" (UID: "641ea3b4-9444-43cc-88b0-461a677bd1a7"). InnerVolumeSpecName "pvc-b8f607a9-8b55-4e8d-8fe4-271ae4fca8c7". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Aug 28 17:05:18 addons-161312 kubelet[2324]: I0828 17:05:18.602445    2324 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/641ea3b4-9444-43cc-88b0-461a677bd1a7-kube-api-access-zmn7c" (OuterVolumeSpecName: "kube-api-access-zmn7c") pod "641ea3b4-9444-43cc-88b0-461a677bd1a7" (UID: "641ea3b4-9444-43cc-88b0-461a677bd1a7"). InnerVolumeSpecName "kube-api-access-zmn7c". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 28 17:05:18 addons-161312 kubelet[2324]: I0828 17:05:18.700873    2324 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-zmn7c\" (UniqueName: \"kubernetes.io/projected/641ea3b4-9444-43cc-88b0-461a677bd1a7-kube-api-access-zmn7c\") on node \"addons-161312\" DevicePath \"\""
	Aug 28 17:05:18 addons-161312 kubelet[2324]: I0828 17:05:18.700917    2324 reconciler_common.go:288] "Volume detached for volume \"pvc-b8f607a9-8b55-4e8d-8fe4-271ae4fca8c7\" (UniqueName: \"kubernetes.io/host-path/641ea3b4-9444-43cc-88b0-461a677bd1a7-pvc-b8f607a9-8b55-4e8d-8fe4-271ae4fca8c7\") on node \"addons-161312\" DevicePath \"\""
	Aug 28 17:05:19 addons-161312 kubelet[2324]: I0828 17:05:19.425928    2324 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f982f0e-567d-48a5-b66c-c6bf898fc4b7" path="/var/lib/kubelet/pods/3f982f0e-567d-48a5-b66c-c6bf898fc4b7/volumes"
	Aug 28 17:05:19 addons-161312 kubelet[2324]: I0828 17:05:19.426374    2324 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="641ea3b4-9444-43cc-88b0-461a677bd1a7" path="/var/lib/kubelet/pods/641ea3b4-9444-43cc-88b0-461a677bd1a7/volumes"
	Aug 28 17:05:19 addons-161312 kubelet[2324]: I0828 17:05:19.522460    2324 scope.go:117] "RemoveContainer" containerID="99036035e792b95f9cd5d7a982905f48606ae1eee0c1814210d9dc9a31f994db"
	Aug 28 17:05:19 addons-161312 kubelet[2324]: I0828 17:05:19.617981    2324 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvlm2\" (UniqueName: \"kubernetes.io/projected/c7dd58ff-e9b5-4511-9a22-023705b9fdfe-kube-api-access-dvlm2\") pod \"c7dd58ff-e9b5-4511-9a22-023705b9fdfe\" (UID: \"c7dd58ff-e9b5-4511-9a22-023705b9fdfe\") "
	Aug 28 17:05:19 addons-161312 kubelet[2324]: E0828 17:05:19.622482    2324 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="641ea3b4-9444-43cc-88b0-461a677bd1a7" containerName="busybox"
	Aug 28 17:05:19 addons-161312 kubelet[2324]: E0828 17:05:19.622513    2324 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c7dd58ff-e9b5-4511-9a22-023705b9fdfe" containerName="registry"
	Aug 28 17:05:19 addons-161312 kubelet[2324]: I0828 17:05:19.622553    2324 memory_manager.go:354] "RemoveStaleState removing state" podUID="641ea3b4-9444-43cc-88b0-461a677bd1a7" containerName="busybox"
	Aug 28 17:05:19 addons-161312 kubelet[2324]: I0828 17:05:19.622563    2324 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7dd58ff-e9b5-4511-9a22-023705b9fdfe" containerName="registry"
	Aug 28 17:05:19 addons-161312 kubelet[2324]: I0828 17:05:19.624671    2324 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7dd58ff-e9b5-4511-9a22-023705b9fdfe-kube-api-access-dvlm2" (OuterVolumeSpecName: "kube-api-access-dvlm2") pod "c7dd58ff-e9b5-4511-9a22-023705b9fdfe" (UID: "c7dd58ff-e9b5-4511-9a22-023705b9fdfe"). InnerVolumeSpecName "kube-api-access-dvlm2". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 28 17:05:19 addons-161312 kubelet[2324]: I0828 17:05:19.625565    2324 scope.go:117] "RemoveContainer" containerID="7718a307feb581b03178ef8fdabe2a9c50a97b8e11cff86d684a347108a265a7"
	Aug 28 17:05:19 addons-161312 kubelet[2324]: I0828 17:05:19.718625    2324 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/bdf71fb2-5820-4809-bdce-86fb11ea7b8f-gcp-creds\") pod \"helper-pod-delete-pvc-b8f607a9-8b55-4e8d-8fe4-271ae4fca8c7\" (UID: \"bdf71fb2-5820-4809-bdce-86fb11ea7b8f\") " pod="local-path-storage/helper-pod-delete-pvc-b8f607a9-8b55-4e8d-8fe4-271ae4fca8c7"
	Aug 28 17:05:19 addons-161312 kubelet[2324]: I0828 17:05:19.718727    2324 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6p5qh\" (UniqueName: \"kubernetes.io/projected/bdf71fb2-5820-4809-bdce-86fb11ea7b8f-kube-api-access-6p5qh\") pod \"helper-pod-delete-pvc-b8f607a9-8b55-4e8d-8fe4-271ae4fca8c7\" (UID: \"bdf71fb2-5820-4809-bdce-86fb11ea7b8f\") " pod="local-path-storage/helper-pod-delete-pvc-b8f607a9-8b55-4e8d-8fe4-271ae4fca8c7"
	Aug 28 17:05:19 addons-161312 kubelet[2324]: I0828 17:05:19.718794    2324 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/bdf71fb2-5820-4809-bdce-86fb11ea7b8f-script\") pod \"helper-pod-delete-pvc-b8f607a9-8b55-4e8d-8fe4-271ae4fca8c7\" (UID: \"bdf71fb2-5820-4809-bdce-86fb11ea7b8f\") " pod="local-path-storage/helper-pod-delete-pvc-b8f607a9-8b55-4e8d-8fe4-271ae4fca8c7"
	Aug 28 17:05:19 addons-161312 kubelet[2324]: I0828 17:05:19.718838    2324 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/bdf71fb2-5820-4809-bdce-86fb11ea7b8f-data\") pod \"helper-pod-delete-pvc-b8f607a9-8b55-4e8d-8fe4-271ae4fca8c7\" (UID: \"bdf71fb2-5820-4809-bdce-86fb11ea7b8f\") " pod="local-path-storage/helper-pod-delete-pvc-b8f607a9-8b55-4e8d-8fe4-271ae4fca8c7"
	Aug 28 17:05:19 addons-161312 kubelet[2324]: I0828 17:05:19.718901    2324 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-dvlm2\" (UniqueName: \"kubernetes.io/projected/c7dd58ff-e9b5-4511-9a22-023705b9fdfe-kube-api-access-dvlm2\") on node \"addons-161312\" DevicePath \"\""
	Aug 28 17:05:19 addons-161312 kubelet[2324]: I0828 17:05:19.920424    2324 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-984v9\" (UniqueName: \"kubernetes.io/projected/c0749a82-4329-4dc6-92f9-0bd490e250bc-kube-api-access-984v9\") pod \"c0749a82-4329-4dc6-92f9-0bd490e250bc\" (UID: \"c0749a82-4329-4dc6-92f9-0bd490e250bc\") "
	Aug 28 17:05:19 addons-161312 kubelet[2324]: I0828 17:05:19.922576    2324 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0749a82-4329-4dc6-92f9-0bd490e250bc-kube-api-access-984v9" (OuterVolumeSpecName: "kube-api-access-984v9") pod "c0749a82-4329-4dc6-92f9-0bd490e250bc" (UID: "c0749a82-4329-4dc6-92f9-0bd490e250bc"). InnerVolumeSpecName "kube-api-access-984v9". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 28 17:05:20 addons-161312 kubelet[2324]: I0828 17:05:20.023212    2324 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-984v9\" (UniqueName: \"kubernetes.io/projected/c0749a82-4329-4dc6-92f9-0bd490e250bc-kube-api-access-984v9\") on node \"addons-161312\" DevicePath \"\""
	Aug 28 17:05:20 addons-161312 kubelet[2324]: I0828 17:05:20.714887    2324 scope.go:117] "RemoveContainer" containerID="a69dc079b82a7807dcec21d632cabad0231019793363657acd1d80e02c11f849"
	Aug 28 17:05:21 addons-161312 kubelet[2324]: I0828 17:05:21.395592    2324 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0749a82-4329-4dc6-92f9-0bd490e250bc" path="/var/lib/kubelet/pods/c0749a82-4329-4dc6-92f9-0bd490e250bc/volumes"
	Aug 28 17:05:21 addons-161312 kubelet[2324]: I0828 17:05:21.396059    2324 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7dd58ff-e9b5-4511-9a22-023705b9fdfe" path="/var/lib/kubelet/pods/c7dd58ff-e9b5-4511-9a22-023705b9fdfe/volumes"
	
	
	==> storage-provisioner [43100bccad9b] <==
	I0828 16:52:43.132130       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0828 16:52:43.160672       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0828 16:52:43.160726       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0828 16:52:43.182372       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0828 16:52:43.184829       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-161312_8b13d2ad-9127-486d-9201-a9ba8289a776!
	I0828 16:52:43.185360       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"404dc3dd-1469-4d58-b8ff-a40d2e3414ce", APIVersion:"v1", ResourceVersion:"588", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-161312_8b13d2ad-9127-486d-9201-a9ba8289a776 became leader
	I0828 16:52:43.285010       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-161312_8b13d2ad-9127-486d-9201-a9ba8289a776!
	

                                                
                                                
-- /stdout --
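
The repeated "failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" errors in the kube-controller-manager log above line up with the apiserver terminating the volumesnapshot* watchers at 17:05:05, which is the usual signature of metadata informers still watching CustomResourceDefinitions that have just been deleted. A minimal follow-up sketch, assuming the cluster were still reachable (illustrative only, not a command recorded in this run):

	# List any snapshot CRDs that survived the addon teardown; an empty result
	# would confirm the group was removed and the reflector errors should stop
	# once the stale informers are dropped.
	kubectl --context addons-161312 get crd -o name | grep snapshot.storage.k8s.io
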
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-161312 -n addons-161312
helpers_test.go:261: (dbg) Run:  kubectl --context addons-161312 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-klgcd ingress-nginx-admission-patch-vlb58 helper-pod-delete-pvc-b8f607a9-8b55-4e8d-8fe4-271ae4fca8c7
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-161312 describe pod busybox ingress-nginx-admission-create-klgcd ingress-nginx-admission-patch-vlb58 helper-pod-delete-pvc-b8f607a9-8b55-4e8d-8fe4-271ae4fca8c7
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-161312 describe pod busybox ingress-nginx-admission-create-klgcd ingress-nginx-admission-patch-vlb58 helper-pod-delete-pvc-b8f607a9-8b55-4e8d-8fe4-271ae4fca8c7: exit status 1 (101.649814ms)

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-161312/192.168.49.2
	Start Time:       Wed, 28 Aug 2024 16:56:04 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-l5jhw (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-l5jhw:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m17s                  default-scheduler  Successfully assigned default/busybox to addons-161312
	  Normal   Pulling    7m54s (x4 over 9m17s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m54s (x4 over 9m17s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m54s (x4 over 9m17s)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m26s (x6 over 9m16s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m3s (x21 over 9m16s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-klgcd" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-vlb58" not found
	Error from server (NotFound): pods "helper-pod-delete-pvc-b8f607a9-8b55-4e8d-8fe4-271ae4fca8c7" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-161312 describe pod busybox ingress-nginx-admission-create-klgcd ingress-nginx-admission-patch-vlb58 helper-pod-delete-pvc-b8f607a9-8b55-4e8d-8fe4-271ae4fca8c7: exit status 1
--- FAIL: TestAddons/parallel/Registry (74.63s)
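
The describe output above shows the busybox pod stuck in ImagePullBackOff: every pull of gcr.io/k8s-minikube/busybox:1.28.4-glibc was rejected with "unauthorized: authentication failed", so the container never started. A minimal diagnostic sketch, assuming the cluster were still up (an illustrative follow-up, not a command recorded in this run):

	# Repeat the pull directly on the node to separate registry-auth failures
	# from kubelet- or pod-level problems.
	out/minikube-linux-arm64 -p addons-161312 ssh -- docker pull gcr.io/k8s-minikube/busybox:1.28.4-glibc
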

                                                
                                    

Test pass (318/343)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 12.5
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.08
9 TestDownloadOnly/v1.20.0/DeleteAll 0.21
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.0/json-events 7.22
13 TestDownloadOnly/v1.31.0/preload-exists 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.08
18 TestDownloadOnly/v1.31.0/DeleteAll 0.24
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.57
22 TestOffline 57.41
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 225.26
29 TestAddons/serial/Volcano 41.23
31 TestAddons/serial/GCPAuth/Namespaces 0.18
34 TestAddons/parallel/Ingress 20.13
35 TestAddons/parallel/InspektorGadget 11.76
36 TestAddons/parallel/MetricsServer 5.71
39 TestAddons/parallel/CSI 44.95
40 TestAddons/parallel/Headlamp 15.67
41 TestAddons/parallel/CloudSpanner 5.52
42 TestAddons/parallel/LocalPath 52.2
43 TestAddons/parallel/NvidiaDevicePlugin 6.49
44 TestAddons/parallel/Yakd 11.71
45 TestAddons/StoppedEnableDisable 6.13
46 TestCertOptions 37.67
47 TestCertExpiration 245.44
48 TestDockerFlags 38.29
49 TestForceSystemdFlag 39.64
50 TestForceSystemdEnv 44.35
56 TestErrorSpam/setup 31.83
57 TestErrorSpam/start 0.72
58 TestErrorSpam/status 1.03
59 TestErrorSpam/pause 1.45
60 TestErrorSpam/unpause 1.54
61 TestErrorSpam/stop 2.09
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 75.29
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 32.62
68 TestFunctional/serial/KubeContext 0.06
69 TestFunctional/serial/KubectlGetPods 0.11
72 TestFunctional/serial/CacheCmd/cache/add_remote 3.24
73 TestFunctional/serial/CacheCmd/cache/add_local 0.95
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
75 TestFunctional/serial/CacheCmd/cache/list 0.06
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.62
78 TestFunctional/serial/CacheCmd/cache/delete 0.11
79 TestFunctional/serial/MinikubeKubectlCmd 0.14
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
81 TestFunctional/serial/ExtraConfig 45.32
82 TestFunctional/serial/ComponentHealth 0.1
83 TestFunctional/serial/LogsCmd 1.13
84 TestFunctional/serial/LogsFileCmd 1.25
85 TestFunctional/serial/InvalidService 4.34
87 TestFunctional/parallel/ConfigCmd 0.46
88 TestFunctional/parallel/DashboardCmd 14.75
89 TestFunctional/parallel/DryRun 0.44
90 TestFunctional/parallel/InternationalLanguage 0.2
91 TestFunctional/parallel/StatusCmd 1.17
95 TestFunctional/parallel/ServiceCmdConnect 10.7
96 TestFunctional/parallel/AddonsCmd 0.18
97 TestFunctional/parallel/PersistentVolumeClaim 26.28
99 TestFunctional/parallel/SSHCmd 0.72
100 TestFunctional/parallel/CpCmd 2.37
102 TestFunctional/parallel/FileSync 0.38
103 TestFunctional/parallel/CertSync 2.21
107 TestFunctional/parallel/NodeLabels 0.11
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.37
111 TestFunctional/parallel/License 0.25
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.62
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.54
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.1
118 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
122 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
123 TestFunctional/parallel/ServiceCmd/DeployApp 7.29
124 TestFunctional/parallel/ProfileCmd/profile_not_create 0.4
125 TestFunctional/parallel/ProfileCmd/profile_list 0.41
126 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
127 TestFunctional/parallel/ServiceCmd/List 0.65
128 TestFunctional/parallel/MountCmd/any-port 9.42
129 TestFunctional/parallel/ServiceCmd/JSONOutput 0.53
130 TestFunctional/parallel/ServiceCmd/HTTPS 0.47
131 TestFunctional/parallel/ServiceCmd/Format 0.55
132 TestFunctional/parallel/ServiceCmd/URL 0.48
133 TestFunctional/parallel/MountCmd/specific-port 2.01
134 TestFunctional/parallel/MountCmd/VerifyCleanup 2.57
135 TestFunctional/parallel/Version/short 0.07
136 TestFunctional/parallel/Version/components 1.01
137 TestFunctional/parallel/ImageCommands/ImageListShort 0.27
138 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
139 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
140 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
141 TestFunctional/parallel/ImageCommands/ImageBuild 3.21
142 TestFunctional/parallel/ImageCommands/Setup 0.8
143 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.37
144 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.87
145 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.05
146 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.36
147 TestFunctional/parallel/UpdateContextCmd/no_changes 0.2
148 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.23
149 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.23
150 TestFunctional/parallel/DockerEnv/bash 1.36
151 TestFunctional/parallel/ImageCommands/ImageRemove 0.58
152 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.71
153 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.48
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
160 TestMultiControlPlane/serial/StartCluster 126.27
161 TestMultiControlPlane/serial/DeployApp 45.61
162 TestMultiControlPlane/serial/PingHostFromPods 1.67
163 TestMultiControlPlane/serial/AddWorkerNode 26.66
164 TestMultiControlPlane/serial/NodeLabels 0.1
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.8
166 TestMultiControlPlane/serial/CopyFile 19.7
167 TestMultiControlPlane/serial/StopSecondaryNode 11.81
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.6
169 TestMultiControlPlane/serial/RestartSecondaryNode 66.55
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.76
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 181.07
172 TestMultiControlPlane/serial/DeleteSecondaryNode 12.03
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.63
174 TestMultiControlPlane/serial/StopCluster 32.86
175 TestMultiControlPlane/serial/RestartCluster 136.55
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.98
177 TestMultiControlPlane/serial/AddSecondaryNode 44.59
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.81
181 TestImageBuild/serial/Setup 35.44
182 TestImageBuild/serial/NormalBuild 1.87
183 TestImageBuild/serial/BuildWithBuildArg 1.09
184 TestImageBuild/serial/BuildWithDockerIgnore 0.93
185 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.79
189 TestJSONOutput/start/Command 72.91
190 TestJSONOutput/start/Audit 0
192 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/pause/Command 0.62
196 TestJSONOutput/pause/Audit 0
198 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/unpause/Command 0.53
202 TestJSONOutput/unpause/Audit 0
204 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
207 TestJSONOutput/stop/Command 10.9
208 TestJSONOutput/stop/Audit 0
210 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
211 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
212 TestErrorJSONOutput 0.22
214 TestKicCustomNetwork/create_custom_network 34.96
215 TestKicCustomNetwork/use_default_bridge_network 33.39
216 TestKicExistingNetwork 32.34
217 TestKicCustomSubnet 37.96
218 TestKicStaticIP 33.13
219 TestMainNoArgs 0.05
220 TestMinikubeProfile 69.82
223 TestMountStart/serial/StartWithMountFirst 7.82
224 TestMountStart/serial/VerifyMountFirst 0.27
225 TestMountStart/serial/StartWithMountSecond 8.26
226 TestMountStart/serial/VerifyMountSecond 0.26
227 TestMountStart/serial/DeleteFirst 1.51
228 TestMountStart/serial/VerifyMountPostDelete 0.26
229 TestMountStart/serial/Stop 1.19
230 TestMountStart/serial/RestartStopped 8.39
231 TestMountStart/serial/VerifyMountPostStop 0.26
234 TestMultiNode/serial/FreshStart2Nodes 86.92
235 TestMultiNode/serial/DeployApp2Nodes 49.58
236 TestMultiNode/serial/PingHostFrom2Pods 1.02
237 TestMultiNode/serial/AddNode 19.09
238 TestMultiNode/serial/MultiNodeLabels 0.11
239 TestMultiNode/serial/ProfileList 0.38
240 TestMultiNode/serial/CopyFile 10.16
241 TestMultiNode/serial/StopNode 2.24
242 TestMultiNode/serial/StartAfterStop 11.2
243 TestMultiNode/serial/RestartKeepsNodes 98.51
244 TestMultiNode/serial/DeleteNode 5.64
245 TestMultiNode/serial/StopMultiNode 21.58
246 TestMultiNode/serial/RestartMultiNode 61.45
247 TestMultiNode/serial/ValidateNameConflict 37.87
252 TestPreload 142.75
254 TestScheduledStopUnix 104.68
255 TestSkaffold 114.94
257 TestInsufficientStorage 11.5
258 TestRunningBinaryUpgrade 104.91
260 TestKubernetesUpgrade 372.66
261 TestMissingContainerUpgrade 176.94
263 TestPause/serial/Start 89.99
264 TestPause/serial/SecondStartNoReconfiguration 28.42
265 TestPause/serial/Pause 0.78
266 TestPause/serial/VerifyStatus 0.43
267 TestPause/serial/Unpause 0.64
268 TestPause/serial/PauseAgain 1.02
269 TestPause/serial/DeletePaused 2.32
270 TestPause/serial/VerifyDeletedResources 0.42
271 TestStoppedBinaryUpgrade/Setup 0.99
272 TestStoppedBinaryUpgrade/Upgrade 91.53
273 TestStoppedBinaryUpgrade/MinikubeLogs 1.69
282 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
283 TestNoKubernetes/serial/StartWithK8s 44.36
295 TestNoKubernetes/serial/StartWithStopK8s 18.76
296 TestNoKubernetes/serial/Start 13.11
297 TestNoKubernetes/serial/VerifyK8sNotRunning 0.57
298 TestNoKubernetes/serial/ProfileList 0.97
299 TestNoKubernetes/serial/Stop 1.25
300 TestNoKubernetes/serial/StartNoArgs 8.56
301 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.34
303 TestStartStop/group/old-k8s-version/serial/FirstStart 140.09
304 TestStartStop/group/old-k8s-version/serial/DeployApp 10.96
306 TestStartStop/group/no-preload/serial/FirstStart 63.34
307 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.92
308 TestStartStop/group/old-k8s-version/serial/Stop 12.02
309 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.22
310 TestStartStop/group/old-k8s-version/serial/SecondStart 378.73
311 TestStartStop/group/no-preload/serial/DeployApp 10.46
312 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.15
313 TestStartStop/group/no-preload/serial/Stop 10.88
314 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
315 TestStartStop/group/no-preload/serial/SecondStart 266.48
316 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
317 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.13
318 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
319 TestStartStop/group/no-preload/serial/Pause 3.16
321 TestStartStop/group/embed-certs/serial/FirstStart 81.53
322 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
323 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.14
324 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.27
325 TestStartStop/group/old-k8s-version/serial/Pause 3.43
327 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 43.7
328 TestStartStop/group/embed-certs/serial/DeployApp 9.43
329 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.44
330 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.07
331 TestStartStop/group/embed-certs/serial/Stop 10.97
332 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.13
333 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.97
334 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
335 TestStartStop/group/embed-certs/serial/SecondStart 270.68
336 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
337 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 272.49
338 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
339 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
340 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.1
341 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
342 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.22
343 TestStartStop/group/embed-certs/serial/Pause 2.92
344 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.28
345 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.96
347 TestStartStop/group/newest-cni/serial/FirstStart 46.57
348 TestNetworkPlugins/group/auto/Start 80.14
349 TestStartStop/group/newest-cni/serial/DeployApp 0
350 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.3
351 TestStartStop/group/newest-cni/serial/Stop 11.07
352 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.17
353 TestStartStop/group/newest-cni/serial/SecondStart 19.07
354 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
355 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
356 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
357 TestStartStop/group/newest-cni/serial/Pause 3.69
358 TestNetworkPlugins/group/kindnet/Start 75.87
359 TestNetworkPlugins/group/auto/KubeletFlags 0.36
360 TestNetworkPlugins/group/auto/NetCatPod 13.43
361 TestNetworkPlugins/group/auto/DNS 0.24
362 TestNetworkPlugins/group/auto/Localhost 0.22
363 TestNetworkPlugins/group/auto/HairPin 0.25
364 TestNetworkPlugins/group/calico/Start 75.68
365 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
366 TestNetworkPlugins/group/kindnet/KubeletFlags 0.36
367 TestNetworkPlugins/group/kindnet/NetCatPod 11.33
368 TestNetworkPlugins/group/kindnet/DNS 0.26
369 TestNetworkPlugins/group/kindnet/Localhost 0.28
370 TestNetworkPlugins/group/kindnet/HairPin 0.27
371 TestNetworkPlugins/group/calico/ControllerPod 6.01
372 TestNetworkPlugins/group/custom-flannel/Start 62.55
373 TestNetworkPlugins/group/calico/KubeletFlags 0.38
374 TestNetworkPlugins/group/calico/NetCatPod 12.5
375 TestNetworkPlugins/group/calico/DNS 0.26
376 TestNetworkPlugins/group/calico/Localhost 0.21
377 TestNetworkPlugins/group/calico/HairPin 0.22
378 TestNetworkPlugins/group/false/Start 81.73
379 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.39
380 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.41
381 TestNetworkPlugins/group/custom-flannel/DNS 0.31
382 TestNetworkPlugins/group/custom-flannel/Localhost 0.27
383 TestNetworkPlugins/group/custom-flannel/HairPin 0.25
384 TestNetworkPlugins/group/enable-default-cni/Start 72.97
385 TestNetworkPlugins/group/false/KubeletFlags 0.4
386 TestNetworkPlugins/group/false/NetCatPod 12.36
387 TestNetworkPlugins/group/false/DNS 0.21
388 TestNetworkPlugins/group/false/Localhost 0.19
389 TestNetworkPlugins/group/false/HairPin 0.17
390 TestNetworkPlugins/group/flannel/Start 57.68
391 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.39
392 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.32
393 TestNetworkPlugins/group/enable-default-cni/DNS 0.24
394 TestNetworkPlugins/group/enable-default-cni/Localhost 0.22
395 TestNetworkPlugins/group/enable-default-cni/HairPin 0.23
396 TestNetworkPlugins/group/bridge/Start 50.61
397 TestNetworkPlugins/group/flannel/ControllerPod 6.01
398 TestNetworkPlugins/group/flannel/KubeletFlags 0.36
399 TestNetworkPlugins/group/flannel/NetCatPod 11.35
400 TestNetworkPlugins/group/flannel/DNS 0.28
401 TestNetworkPlugins/group/flannel/Localhost 0.15
402 TestNetworkPlugins/group/flannel/HairPin 0.17
403 TestNetworkPlugins/group/bridge/KubeletFlags 0.28
404 TestNetworkPlugins/group/bridge/NetCatPod 13.38
405 TestNetworkPlugins/group/kubenet/Start 54.97
406 TestNetworkPlugins/group/bridge/DNS 0.24
407 TestNetworkPlugins/group/bridge/Localhost 0.2
408 TestNetworkPlugins/group/bridge/HairPin 0.19
409 TestNetworkPlugins/group/kubenet/KubeletFlags 0.36
410 TestNetworkPlugins/group/kubenet/NetCatPod 11.26
411 TestNetworkPlugins/group/kubenet/DNS 0.18
412 TestNetworkPlugins/group/kubenet/Localhost 0.15
413 TestNetworkPlugins/group/kubenet/HairPin 0.17
TestDownloadOnly/v1.20.0/json-events (12.5s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-224586 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-224586 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (12.50068855s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (12.50s)
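
The json-events variant runs minikube start with -o=json, which emits one CloudEvents-style JSON record per line on stdout instead of human-readable text, presumably so the test can assert on the emitted download-progress events. A hypothetical way to inspect the same stream outside the harness, assuming jq is installed (the "demo" profile name here is illustrative, not from the report):

	out/minikube-linux-arm64 start -o=json --download-only -p demo \
	  --kubernetes-version=v1.20.0 --driver=docker | jq -r '.type'
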

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-224586
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-224586: exit status 85 (78.227504ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-224586 | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC |          |
	|         | -p download-only-224586        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/28 16:51:15
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0828 16:51:15.832070    7590 out.go:345] Setting OutFile to fd 1 ...
	I0828 16:51:15.832179    7590 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 16:51:15.832189    7590 out.go:358] Setting ErrFile to fd 2...
	I0828 16:51:15.832194    7590 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 16:51:15.832426    7590 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-2268/.minikube/bin
	W0828 16:51:15.832581    7590 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19529-2268/.minikube/config/config.json: open /home/jenkins/minikube-integration/19529-2268/.minikube/config/config.json: no such file or directory
	I0828 16:51:15.832959    7590 out.go:352] Setting JSON to true
	I0828 16:51:15.833714    7590 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":2023,"bootTime":1724861853,"procs":148,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0828 16:51:15.833782    7590 start.go:139] virtualization:  
	I0828 16:51:15.836462    7590 out.go:97] [download-only-224586] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	W0828 16:51:15.836631    7590 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19529-2268/.minikube/cache/preloaded-tarball: no such file or directory
	I0828 16:51:15.836681    7590 notify.go:220] Checking for updates...
	I0828 16:51:15.838104    7590 out.go:169] MINIKUBE_LOCATION=19529
	I0828 16:51:15.839852    7590 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 16:51:15.841538    7590 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19529-2268/kubeconfig
	I0828 16:51:15.843113    7590 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-2268/.minikube
	I0828 16:51:15.844881    7590 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0828 16:51:15.848015    7590 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0828 16:51:15.848293    7590 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 16:51:15.873020    7590 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0828 16:51:15.873120    7590 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0828 16:51:16.211562    7590 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-28 16:51:16.201950966 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0828 16:51:16.211710    7590 docker.go:307] overlay module found
	I0828 16:51:16.213778    7590 out.go:97] Using the docker driver based on user configuration
	I0828 16:51:16.213816    7590 start.go:297] selected driver: docker
	I0828 16:51:16.213824    7590 start.go:901] validating driver "docker" against <nil>
	I0828 16:51:16.213946    7590 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0828 16:51:16.266327    7590 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-28 16:51:16.257475546 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0828 16:51:16.266518    7590 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0828 16:51:16.266798    7590 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0828 16:51:16.266982    7590 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0828 16:51:16.268920    7590 out.go:169] Using Docker driver with root privileges
	I0828 16:51:16.270525    7590 cni.go:84] Creating CNI manager for ""
	I0828 16:51:16.270550    7590 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0828 16:51:16.270628    7590 start.go:340] cluster config:
	{Name:download-only-224586 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-224586 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 16:51:16.272408    7590 out.go:97] Starting "download-only-224586" primary control-plane node in "download-only-224586" cluster
	I0828 16:51:16.272435    7590 cache.go:121] Beginning downloading kic base image for docker with docker
	I0828 16:51:16.274156    7590 out.go:97] Pulling base image v0.0.44-1724775115-19521 ...
	I0828 16:51:16.274193    7590 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0828 16:51:16.274358    7590 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce in local docker daemon
	I0828 16:51:16.290569    7590 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce to local cache
	I0828 16:51:16.290751    7590 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce in local cache directory
	I0828 16:51:16.290878    7590 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce to local cache
	I0828 16:51:16.337112    7590 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0828 16:51:16.337141    7590 cache.go:56] Caching tarball of preloaded images
	I0828 16:51:16.337284    7590 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0828 16:51:16.339445    7590 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0828 16:51:16.339472    7590 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0828 16:51:16.543885    7590 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /home/jenkins/minikube-integration/19529-2268/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0828 16:51:22.077758    7590 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0828 16:51:22.077894    7590 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19529-2268/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0828 16:51:23.077827    7590 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0828 16:51:23.078221    7590 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/download-only-224586/config.json ...
	I0828 16:51:23.078257    7590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/download-only-224586/config.json: {Name:mk07370067f8d2177c1530ec5427c716788869e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 16:51:23.078439    7590 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0828 16:51:23.078605    7590 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/19529-2268/.minikube/cache/linux/arm64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-224586 host does not exist
	  To start a cluster, run: "minikube start -p download-only-224586"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.08s)
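Note: exit status 85 from `minikube logs` is the expected result here, not a regression. A --download-only profile only populates the cache and never creates a host, so there is nothing to collect logs from. A minimal reproduction sketch, assuming a local arm64 build at out/minikube-linux-arm64 as used throughout this run:

	# download artifacts only; no control-plane host is created
	out/minikube-linux-arm64 start -o=json --download-only -p download-only-224586 --force \
	  --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker
	# expected to exit 85: the host for this profile does not exist
	out/minikube-linux-arm64 logs -p download-only-224586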

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-224586
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-427986 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-427986 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (7.22237041s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (7.22s)

=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-427986
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-427986: exit status 85 (74.905082ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-224586 | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC |                     |
	|         | -p download-only-224586        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC | 28 Aug 24 16:51 UTC |
	| delete  | -p download-only-224586        | download-only-224586 | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC | 28 Aug 24 16:51 UTC |
	| start   | -o=json --download-only        | download-only-427986 | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC |                     |
	|         | -p download-only-427986        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/28 16:51:28
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0828 16:51:28.757060    7795 out.go:345] Setting OutFile to fd 1 ...
	I0828 16:51:28.757169    7795 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 16:51:28.757182    7795 out.go:358] Setting ErrFile to fd 2...
	I0828 16:51:28.757187    7795 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 16:51:28.757417    7795 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-2268/.minikube/bin
	I0828 16:51:28.757808    7795 out.go:352] Setting JSON to true
	I0828 16:51:28.758514    7795 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":2036,"bootTime":1724861853,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0828 16:51:28.758583    7795 start.go:139] virtualization:  
	I0828 16:51:28.761865    7795 out.go:97] [download-only-427986] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0828 16:51:28.762065    7795 notify.go:220] Checking for updates...
	I0828 16:51:28.764694    7795 out.go:169] MINIKUBE_LOCATION=19529
	I0828 16:51:28.767474    7795 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 16:51:28.770032    7795 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19529-2268/kubeconfig
	I0828 16:51:28.772730    7795 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-2268/.minikube
	I0828 16:51:28.775468    7795 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0828 16:51:28.780915    7795 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0828 16:51:28.781173    7795 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 16:51:28.805005    7795 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0828 16:51:28.805114    7795 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0828 16:51:28.883864    7795 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-28 16:51:28.874473424 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0828 16:51:28.883988    7795 docker.go:307] overlay module found
	I0828 16:51:28.886779    7795 out.go:97] Using the docker driver based on user configuration
	I0828 16:51:28.886809    7795 start.go:297] selected driver: docker
	I0828 16:51:28.886818    7795 start.go:901] validating driver "docker" against <nil>
	I0828 16:51:28.886965    7795 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0828 16:51:28.941297    7795 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-28 16:51:28.932125315 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0828 16:51:28.941460    7795 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0828 16:51:28.941726    7795 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0828 16:51:28.941887    7795 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0828 16:51:28.944882    7795 out.go:169] Using Docker driver with root privileges
	I0828 16:51:28.947512    7795 cni.go:84] Creating CNI manager for ""
	I0828 16:51:28.947549    7795 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0828 16:51:28.947560    7795 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0828 16:51:28.947646    7795 start.go:340] cluster config:
	{Name:download-only-427986 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-427986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 16:51:28.950568    7795 out.go:97] Starting "download-only-427986" primary control-plane node in "download-only-427986" cluster
	I0828 16:51:28.950600    7795 cache.go:121] Beginning downloading kic base image for docker with docker
	I0828 16:51:28.953454    7795 out.go:97] Pulling base image v0.0.44-1724775115-19521 ...
	I0828 16:51:28.953494    7795 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0828 16:51:28.953663    7795 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce in local docker daemon
	I0828 16:51:28.969250    7795 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce to local cache
	I0828 16:51:28.969391    7795 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce in local cache directory
	I0828 16:51:28.969413    7795 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce in local cache directory, skipping pull
	I0828 16:51:28.969421    7795 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce exists in cache, skipping pull
	I0828 16:51:28.969430    7795 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce as a tarball
	I0828 16:51:29.019531    7795 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0828 16:51:29.019565    7795 cache.go:56] Caching tarball of preloaded images
	I0828 16:51:29.019737    7795 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0828 16:51:29.022684    7795 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0828 16:51:29.022718    7795 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 ...
	I0828 16:51:29.144654    7795 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4?checksum=md5:90c22abece392b762c0b4e45be981bb4 -> /home/jenkins/minikube-integration/19529-2268/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-427986 host does not exist
	  To start a cluster, run: "minikube start -p download-only-427986"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.08s)
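The preload tarball download above pins an md5 digest in the URL's checksum parameter, which download.go verifies after saving. A sketch of the same verification done by hand, using the URL and digest exactly as logged:

	curl -LO "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4"
	# digest copied from the ?checksum=md5: parameter in the log above
	echo "90c22abece392b762c0b4e45be981bb4  preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4" | md5sum -c -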

=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.24s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-427986
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-834196 --alsologtostderr --binary-mirror http://127.0.0.1:40931 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-834196" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-834196
--- PASS: TestBinaryMirror (0.57s)
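TestBinaryMirror verifies that kubectl/kubelet/kubeadm downloads can be redirected away from dl.k8s.io with --binary-mirror; the test stands up its own fixture server on 127.0.0.1:40931. A hedged sketch of the same idea outside the harness (the mirror directory and its layout are assumptions, not part of this run):

	# serve a directory mirroring the dl.k8s.io release paths (layout assumed)
	python3 -m http.server 40931 --directory ./k8s-mirror &
	out/minikube-linux-arm64 start --download-only -p binary-mirror-834196 \
	  --binary-mirror http://127.0.0.1:40931 --driver=docker --container-runtime=docker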

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-arm64 start -p offline-docker-414816 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-arm64 start -p offline-docker-414816 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (55.23929705s)
helpers_test.go:175: Cleaning up "offline-docker-414816" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p offline-docker-414816
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p offline-docker-414816: (2.172749059s)
--- PASS: TestOffline (57.41s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-161312
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-161312: exit status 85 (75.101496ms)

-- stdout --
	* Profile "addons-161312" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-161312"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-161312
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-161312: exit status 85 (78.588827ms)

-- stdout --
	* Profile "addons-161312" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-161312"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-161312 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-161312 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns: (3m45.262954639s)
--- PASS: TestAddons/Setup (225.26s)
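With setup complete, the addon set requested by all those --addons flags can be confirmed per profile; a minimal check against the profile created above:

	# prints each addon with its enabled/disabled status for addons-161312
	out/minikube-linux-arm64 -p addons-161312 addons list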

=== RUN   TestAddons/serial/Volcano
addons_test.go:913: volcano-controller stabilized in 45.055662ms
addons_test.go:897: volcano-scheduler stabilized in 45.523683ms
addons_test.go:905: volcano-admission stabilized in 45.693942ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-q26rb" [7a130b27-28a5-4ffa-be53-4762c2ecb49e] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.00337438s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-ws9xv" [0fda139c-b08e-4079-be61-3bfff2acca6c] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004077535s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-8qqbc" [5d243b0d-4d4e-4930-9409-40787c540949] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003691517s
addons_test.go:932: (dbg) Run:  kubectl --context addons-161312 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-161312 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-161312 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [dbbdcfe8-dc8e-4ce6-babc-45e793f7d842] Pending
helpers_test.go:344: "test-job-nginx-0" [dbbdcfe8-dc8e-4ce6-babc-45e793f7d842] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [dbbdcfe8-dc8e-4ce6-babc-45e793f7d842] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 14.004569975s
addons_test.go:968: (dbg) Run:  out/minikube-linux-arm64 -p addons-161312 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-linux-arm64 -p addons-161312 addons disable volcano --alsologtostderr -v=1: (10.577167377s)
--- PASS: TestAddons/serial/Volcano (41.23s)
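While the vcjob from testdata/vcjob.yaml is active, the Volcano objects can be inspected directly with the same context the test uses; a short sketch:

	kubectl --context addons-161312 get vcjob -n my-volcano
	# the job's pod (test-job-nginx-0 above) should reach Running before the addon is disabled
	kubectl --context addons-161312 get pods -n my-volcano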

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-161312 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-161312 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-161312 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-161312 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-161312 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [9d464624-f878-4c5b-b418-71adfa662094] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [9d464624-f878-4c5b-b418-71adfa662094] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.004322502s
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-161312 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-161312 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-161312 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p addons-161312 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-arm64 -p addons-161312 addons disable ingress-dns --alsologtostderr -v=1: (1.680301874s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-arm64 -p addons-161312 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-arm64 -p addons-161312 addons disable ingress --alsologtostderr -v=1: (7.69980322s)
--- PASS: TestAddons/parallel/Ingress (20.13s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-ml8j2" [4fc7770b-22e9-4eaf-ae38-582c8667d859] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004528223s
addons_test.go:851: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-161312
addons_test.go:851: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-161312: (5.754479562s)
--- PASS: TestAddons/parallel/InspektorGadget (11.76s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 3.092471ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-2gwmk" [dd1f5b27-27c7-4ddf-973e-855eb2bbbe37] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003876427s
addons_test.go:417: (dbg) Run:  kubectl --context addons-161312 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-arm64 -p addons-161312 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.71s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 7.279259ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-161312 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-161312 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-161312 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-161312 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-161312 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-161312 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-161312 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-161312 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-161312 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-161312 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-161312 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-161312 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-161312 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [0f10d350-d11f-43d0-9f73-0ad5157336a7] Pending
helpers_test.go:344: "task-pv-pod" [0f10d350-d11f-43d0-9f73-0ad5157336a7] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [0f10d350-d11f-43d0-9f73-0ad5157336a7] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.004156708s
addons_test.go:590: (dbg) Run:  kubectl --context addons-161312 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-161312 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-161312 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-161312 delete pod task-pv-pod
addons_test.go:600: (dbg) Done: kubectl --context addons-161312 delete pod task-pv-pod: (1.136442828s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-161312 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-161312 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-161312 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-161312 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-161312 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-161312 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-161312 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-161312 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-161312 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-161312 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-161312 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [3b5711de-70a7-4885-ab22-5a99c4d9544d] Pending
helpers_test.go:344: "task-pv-pod-restore" [3b5711de-70a7-4885-ab22-5a99c4d9544d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [3b5711de-70a7-4885-ab22-5a99c4d9544d] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004372115s
addons_test.go:632: (dbg) Run:  kubectl --context addons-161312 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-161312 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-161312 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-arm64 -p addons-161312 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-arm64 -p addons-161312 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.679441868s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-arm64 -p addons-161312 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (44.95s)
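The CSI test above walks the full provision -> snapshot -> restore loop. Condensed to just its kubectl steps (manifest paths exactly as used in the log), the loop looks like this:

	# provision a claim and a pod that writes into it
	kubectl --context addons-161312 create -f testdata/csi-hostpath-driver/pvc.yaml
	kubectl --context addons-161312 create -f testdata/csi-hostpath-driver/pv-pod.yaml
	# snapshot the volume, then restore it into a new claim and a fresh pod
	kubectl --context addons-161312 create -f testdata/csi-hostpath-driver/snapshot.yaml
	kubectl --context addons-161312 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
	kubectl --context addons-161312 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml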

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-161312 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-7l9dc" [38d53655-848f-4160-adfc-ca8320a5b1c8] Pending
helpers_test.go:344: "headlamp-57fb76fcdb-7l9dc" [38d53655-848f-4160-adfc-ca8320a5b1c8] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-7l9dc" [38d53655-848f-4160-adfc-ca8320a5b1c8] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.00455794s
addons_test.go:839: (dbg) Run:  out/minikube-linux-arm64 -p addons-161312 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-arm64 -p addons-161312 addons disable headlamp --alsologtostderr -v=1: (5.692630442s)
--- PASS: TestAddons/parallel/Headlamp (15.67s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-8spwt" [1cd2783f-85c3-468c-a6ea-ebc91760e9d5] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004276291s
addons_test.go:870: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-161312
--- PASS: TestAddons/parallel/CloudSpanner (5.52s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-161312 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-161312 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-161312 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-161312 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-161312 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-161312 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-161312 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [641ea3b4-9444-43cc-88b0-461a677bd1a7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [641ea3b4-9444-43cc-88b0-461a677bd1a7] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [641ea3b4-9444-43cc-88b0-461a677bd1a7] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.004070315s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-161312 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-arm64 -p addons-161312 ssh "cat /opt/local-path-provisioner/pvc-b8f607a9-8b55-4e8d-8fe4-271ae4fca8c7_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-161312 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-161312 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-arm64 -p addons-161312 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-arm64 -p addons-161312 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.593208573s)
--- PASS: TestAddons/parallel/LocalPath (52.20s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-lbb78" [4b16be02-3cce-4ec1-9435-fabfc1c55ab7] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004029497s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-161312
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.49s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-d42vd" [159aad12-f33f-4fea-92c1-9602ad6d4234] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004264615s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-arm64 -p addons-161312 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-arm64 -p addons-161312 addons disable yakd --alsologtostderr -v=1: (5.702660255s)
--- PASS: TestAddons/parallel/Yakd (11.71s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-161312
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-161312: (5.873891744s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-161312
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-161312
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-161312
--- PASS: TestAddons/StoppedEnableDisable (6.13s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-599822 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-599822 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (34.923391258s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-599822 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-599822 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-599822 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-599822" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-599822
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-599822: (2.075250437s)
--- PASS: TestCertOptions (37.67s)
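The openssl call above dumps the entire apiserver certificate; to see only the names and IPs that the --apiserver-ips/--apiserver-names flags should have injected, a narrower sketch (the grep pattern assumes openssl's usual text layout):

	out/minikube-linux-arm64 -p cert-options-599822 ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"
	# expect 127.0.0.1, 192.168.15.15, localhost and www.google.com among the SANs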

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-177160 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-177160 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (38.317404013s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-177160 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-177160 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (24.955260248s)
helpers_test.go:175: Cleaning up "cert-expiration-177160" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-177160
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-177160: (2.166083015s)
--- PASS: TestCertExpiration (245.44s)
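
The expiry check behind this test reduces to comparing a certificate's NotAfter against the current time. A minimal sketch, assuming the apiserver cert has been copied out of the node as apiserver.crt (the path is illustrative, not the test's):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("apiserver.crt") // assumed local copy of the node's cert
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	remaining := time.Until(cert.NotAfter)
	fmt.Printf("cert expires %s (in %s)\n", cert.NotAfter, remaining.Round(time.Minute))
	if remaining < 0 {
		fmt.Println("expired: a fresh start with a longer --cert-expiration regenerates it")
	}
}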

TestDockerFlags (38.29s)
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-arm64 start -p docker-flags-777688 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E0828 17:52:15.333299    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/skaffold-454253/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:51: (dbg) Done: out/minikube-linux-arm64 start -p docker-flags-777688 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (35.45565713s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-777688 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-777688 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-777688" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-flags-777688
E0828 17:52:43.033759    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/skaffold-454253/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-flags-777688: (2.178463463s)
--- PASS: TestDockerFlags (38.29s)
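
The assertion here is string parsing of the systemctl output: the --docker-env pairs must appear in the unit's Environment property. A sketch against an assumed output line (not the captured one):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Assumed shape of `systemctl show docker --property=Environment` output.
	raw := "Environment=FOO=BAR BAZ=BAT"
	env := strings.TrimPrefix(strings.TrimSpace(raw), "Environment=")
	have := map[string]bool{}
	for _, kv := range strings.Fields(env) {
		have[kv] = true
	}
	for _, want := range []string{"FOO=BAR", "BAZ=BAT"} {
		fmt.Printf("%s present: %v\n", want, have[want])
	}
}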

TestForceSystemdFlag (39.64s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-655280 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E0828 17:49:59.192426    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/skaffold-454253/client.crt: no such file or directory" logger="UnhandledError"
E0828 17:49:59.401370    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/functional-154367/client.crt: no such file or directory" logger="UnhandledError"
E0828 17:50:23.161035    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-655280 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (37.178461527s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-655280 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-655280" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-655280
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-655280: (2.070641755s)
--- PASS: TestForceSystemdFlag (39.64s)
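
The check is a single docker query: with --force-systemd, `docker info --format {{.CgroupDriver}}` inside the node should print "systemd". A sketch that runs the same query on whatever host executes it (the test runs it inside the node over `minikube ssh`):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		log.Fatal(err)
	}
	driver := strings.TrimSpace(string(out))
	fmt.Printf("cgroup driver: %q (expected \"systemd\" under --force-systemd)\n", driver)
}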

TestForceSystemdEnv (44.35s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-496614 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-496614 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (41.765739295s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-496614 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-496614" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-496614
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-496614: (2.257783349s)
--- PASS: TestForceSystemdEnv (44.35s)

TestErrorSpam/setup (31.83s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-743477 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-743477 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-743477 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-743477 --driver=docker  --container-runtime=docker: (31.827269302s)
--- PASS: TestErrorSpam/setup (31.83s)

TestErrorSpam/start (0.72s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-743477 --log_dir /tmp/nospam-743477 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-743477 --log_dir /tmp/nospam-743477 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-743477 --log_dir /tmp/nospam-743477 start --dry-run
--- PASS: TestErrorSpam/start (0.72s)

TestErrorSpam/status (1.03s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-743477 --log_dir /tmp/nospam-743477 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-743477 --log_dir /tmp/nospam-743477 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-743477 --log_dir /tmp/nospam-743477 status
--- PASS: TestErrorSpam/status (1.03s)

TestErrorSpam/pause (1.45s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-743477 --log_dir /tmp/nospam-743477 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-743477 --log_dir /tmp/nospam-743477 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-743477 --log_dir /tmp/nospam-743477 pause
--- PASS: TestErrorSpam/pause (1.45s)

TestErrorSpam/unpause (1.54s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-743477 --log_dir /tmp/nospam-743477 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-743477 --log_dir /tmp/nospam-743477 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-743477 --log_dir /tmp/nospam-743477 unpause
--- PASS: TestErrorSpam/unpause (1.54s)

TestErrorSpam/stop (2.09s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-743477 --log_dir /tmp/nospam-743477 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-743477 --log_dir /tmp/nospam-743477 stop: (1.876860169s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-743477 --log_dir /tmp/nospam-743477 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-743477 --log_dir /tmp/nospam-743477 stop
--- PASS: TestErrorSpam/stop (2.09s)
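
The TestErrorSpam group above runs each subcommand repeatedly and fails if unexpected warning or error lines appear in the output. A toy sketch of that idea, with an assumed allowlist and an assumed "!" warning prefix; the real logic lives in error_spam_test.go:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Stand-in output; "!"-prefixed lines are treated as warnings here.
	output := "* Stopping node \"nospam-743477\"  ...\n! docker is at an old version\n"
	allowed := []string{"Stopping node"} // assumed allowlist, for illustration only
	for _, line := range strings.Split(strings.TrimSpace(output), "\n") {
		if !strings.HasPrefix(line, "!") {
			continue
		}
		spam := true
		for _, ok := range allowed {
			if strings.Contains(line, ok) {
				spam = false
			}
		}
		if spam {
			fmt.Printf("unexpected spam: %q\n", line)
		}
	}
}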

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19529-2268/.minikube/files/etc/test/nested/copy/7584/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (75.29s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-154367 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-154367 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (1m15.288650298s)
--- PASS: TestFunctional/serial/StartWithProxy (75.29s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (32.62s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-154367 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-154367 --alsologtostderr -v=8: (32.622989644s)
functional_test.go:663: soft start took 32.624194143s for "functional-154367" cluster.
--- PASS: TestFunctional/serial/SoftStart (32.62s)

TestFunctional/serial/KubeContext (0.06s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.11s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-154367 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.24s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-154367 cache add registry.k8s.io/pause:3.1: (1.024966542s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-154367 cache add registry.k8s.io/pause:3.3: (1.20082856s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-154367 cache add registry.k8s.io/pause:latest: (1.014093685s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.24s)

TestFunctional/serial/CacheCmd/cache/add_local (0.95s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-154367 /tmp/TestFunctionalserialCacheCmdcacheadd_local966072503/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 cache add minikube-local-cache-test:functional-154367
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 cache delete minikube-local-cache-test:functional-154367
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-154367
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.95s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.62s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-154367 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (294.978401ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.62s)
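
The flow above in sketch form: delete the cached image inside the node, confirm `crictl inspecti` fails, run `minikube cache reload`, confirm it succeeds. A hedged Go sketch shelling out to an assumed `minikube` binary on PATH, not the test harness itself:

package main

import (
	"fmt"
	"os/exec"
)

// run invokes minikube against the profile from the log with the given arguments.
func run(args ...string) error {
	return exec.Command("minikube", append([]string{"-p", "functional-154367"}, args...)...).Run()
}

func main() {
	const img = "registry.k8s.io/pause:latest"
	_ = run("ssh", "sudo", "docker", "rmi", img)
	if err := run("ssh", "sudo", "crictl", "inspecti", img); err != nil {
		fmt.Println("image gone from the node, as expected:", err)
	}
	_ = run("cache", "reload")
	if err := run("ssh", "sudo", "crictl", "inspecti", img); err == nil {
		fmt.Println("image restored from the local cache")
	}
}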

TestFunctional/serial/CacheCmd/cache/delete (0.11s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 kubectl -- --context functional-154367 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-154367 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctional/serial/ExtraConfig (45.32s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-154367 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-154367 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (45.312445826s)
functional_test.go:761: restart took 45.312555602s for "functional-154367" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (45.32s)

TestFunctional/serial/ComponentHealth (0.1s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-154367 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)
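
The phase/status lines come from decoding `kubectl get po -o=json`. A self-contained sketch of the same check against an embedded, abbreviated pod list (the test reads the real one from kubectl):

package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// podList mirrors just the fields the check needs from `kubectl get po -o=json`.
type podList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	// Abbreviated stand-in for the real kubectl output.
	raw := `{"items":[{"metadata":{"name":"etcd"},"status":{"phase":"Running","conditions":[{"type":"Ready","status":"True"}]}}]}`
	var pods podList
	if err := json.Unmarshal([]byte(raw), &pods); err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		ready := "Unknown"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		fmt.Printf("%s phase: %s, ready: %s\n", p.Metadata.Name, p.Status.Phase, ready)
	}
}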

TestFunctional/serial/LogsCmd (1.13s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-154367 logs: (1.127529664s)
--- PASS: TestFunctional/serial/LogsCmd (1.13s)

TestFunctional/serial/LogsFileCmd (1.25s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 logs --file /tmp/TestFunctionalserialLogsFileCmd2034255619/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-154367 logs --file /tmp/TestFunctionalserialLogsFileCmd2034255619/001/logs.txt: (1.246567991s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.25s)

TestFunctional/serial/InvalidService (4.34s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-154367 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-154367
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-154367: exit status 115 (583.347707ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31560 |
	|-----------|-------------|-------------|---------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-154367 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.34s)

TestFunctional/parallel/ConfigCmd (0.46s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-154367 config get cpus: exit status 14 (65.643137ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-154367 config get cpus: exit status 14 (86.396909ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.46s)
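
`config get` on an unset key exits 14, which is what the Non-zero exit lines above record. A sketch of reading that exit code from Go (assumes a `minikube` binary on PATH; not the test's code):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Expected to fail with exit status 14 when the key is unset.
	err := exec.Command("minikube", "-p", "functional-154367", "config", "get", "cpus").Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		fmt.Println("exit status:", ee.ExitCode()) // 14: key not found in config
	} else if err == nil {
		fmt.Println("key was set; exit status 0")
	}
}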

TestFunctional/parallel/DashboardCmd (14.75s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-154367 --alsologtostderr -v=1]
E0828 17:10:33.416858    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-154367 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 49359: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.75s)

TestFunctional/parallel/DryRun (0.44s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-154367 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-154367 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (196.821512ms)
-- stdout --
	* [functional-154367] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19529
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19529-2268/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-2268/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0828 17:10:31.433697   49038 out.go:345] Setting OutFile to fd 1 ...
	I0828 17:10:31.434179   49038 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 17:10:31.434221   49038 out.go:358] Setting ErrFile to fd 2...
	I0828 17:10:31.434241   49038 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 17:10:31.434534   49038 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-2268/.minikube/bin
	I0828 17:10:31.434958   49038 out.go:352] Setting JSON to false
	I0828 17:10:31.435986   49038 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3179,"bootTime":1724861853,"procs":233,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0828 17:10:31.436095   49038 start.go:139] virtualization:  
	I0828 17:10:31.438765   49038 out.go:177] * [functional-154367] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0828 17:10:31.441417   49038 notify.go:220] Checking for updates...
	I0828 17:10:31.442526   49038 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 17:10:31.444958   49038 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 17:10:31.447994   49038 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19529-2268/kubeconfig
	I0828 17:10:31.449807   49038 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-2268/.minikube
	I0828 17:10:31.451495   49038 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0828 17:10:31.453398   49038 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 17:10:31.455574   49038 config.go:182] Loaded profile config "functional-154367": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 17:10:31.456227   49038 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 17:10:31.485013   49038 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0828 17:10:31.485136   49038 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0828 17:10:31.557283   49038 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-28 17:10:31.546583213 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0828 17:10:31.557398   49038 docker.go:307] overlay module found
	I0828 17:10:31.560268   49038 out.go:177] * Using the docker driver based on existing profile
	I0828 17:10:31.562119   49038 start.go:297] selected driver: docker
	I0828 17:10:31.562137   49038 start.go:901] validating driver "docker" against &{Name:functional-154367 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-154367 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 17:10:31.562296   49038 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 17:10:31.564732   49038 out.go:201] 
	W0828 17:10:31.566541   49038 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0828 17:10:31.568166   49038 out.go:201] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-154367 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.44s)
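
The dry run fails fast on the memory gate: the requested 250MB is below the 1800MB usable minimum named in the RSRC_INSUFFICIENT_REQ_MEMORY message. The gate is a plain comparison; a sketch:

package main

import "fmt"

func main() {
	const minUsableMB = 1800 // minimum from the RSRC_INSUFFICIENT_REQ_MEMORY message above
	requestedMB := 250       // from the --memory 250MB flag
	if requestedMB < minUsableMB {
		fmt.Printf("exiting: requested %dMB is less than the usable minimum of %dMB\n", requestedMB, minUsableMB)
	}
}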

TestFunctional/parallel/InternationalLanguage (0.2s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-154367 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-154367 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (196.455026ms)
-- stdout --
	* [functional-154367] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19529
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19529-2268/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-2268/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0828 17:10:31.241617   48993 out.go:345] Setting OutFile to fd 1 ...
	I0828 17:10:31.241731   48993 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 17:10:31.241741   48993 out.go:358] Setting ErrFile to fd 2...
	I0828 17:10:31.241747   48993 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 17:10:31.242128   48993 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-2268/.minikube/bin
	I0828 17:10:31.242526   48993 out.go:352] Setting JSON to false
	I0828 17:10:31.243608   48993 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3179,"bootTime":1724861853,"procs":234,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0828 17:10:31.243683   48993 start.go:139] virtualization:  
	I0828 17:10:31.246072   48993 out.go:177] * [functional-154367] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	I0828 17:10:31.248500   48993 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 17:10:31.248609   48993 notify.go:220] Checking for updates...
	I0828 17:10:31.252902   48993 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 17:10:31.255056   48993 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19529-2268/kubeconfig
	I0828 17:10:31.256865   48993 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-2268/.minikube
	I0828 17:10:31.258757   48993 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0828 17:10:31.260413   48993 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 17:10:31.262964   48993 config.go:182] Loaded profile config "functional-154367": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 17:10:31.263530   48993 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 17:10:31.297500   48993 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0828 17:10:31.297619   48993 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0828 17:10:31.358037   48993 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-28 17:10:31.34830976 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0828 17:10:31.358155   48993 docker.go:307] overlay module found
	I0828 17:10:31.360559   48993 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0828 17:10:31.362359   48993 start.go:297] selected driver: docker
	I0828 17:10:31.362381   48993 start.go:901] validating driver "docker" against &{Name:functional-154367 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-154367 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 17:10:31.362495   48993 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 17:10:31.365806   48993 out.go:201] 
	W0828 17:10:31.367578   48993 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0828 17:10:31.369368   48993 out.go:201] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.20s)

TestFunctional/parallel/StatusCmd (1.17s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.17s)

TestFunctional/parallel/ServiceCmdConnect (10.7s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-154367 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-154367 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-xqzfp" [f6f55a48-f4b5-4fa5-abdd-c2580ef93b5c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-xqzfp" [f6f55a48-f4b5-4fa5-abdd-c2580ef93b5c] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.003730577s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:31791
functional_test.go:1675: http://192.168.49.2:31791: success! body:
Hostname: hello-node-connect-65d86f57f4-xqzfp
Pod Information:
	-no pod information available-
Server values:
	server_version=nginx: 1.13.3 - lua: 10008
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31791
	user-agent=Go-http-client/1.1
Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.70s)
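
The final step is an HTTP probe of the NodePort URL that `service hello-node-connect --url` printed; the echoserver body above is its response. A sketch of that probe (the URL comes from the log and is only reachable from the test host):

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://192.168.49.2:31791")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status %s, %d bytes of echoserver output\n", resp.Status, len(body))
}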

TestFunctional/parallel/AddonsCmd (0.18s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)

TestFunctional/parallel/PersistentVolumeClaim (26.28s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [c62ac932-9253-481c-a6f0-f8d7535d44f7] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003468898s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-154367 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-154367 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-154367 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-154367 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [9fb5a02a-6334-44cb-b57d-5cdc68e73143] Pending
helpers_test.go:344: "sp-pod" [9fb5a02a-6334-44cb-b57d-5cdc68e73143] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [9fb5a02a-6334-44cb-b57d-5cdc68e73143] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.004701482s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-154367 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-154367 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-154367 delete -f testdata/storage-provisioner/pod.yaml: (1.216165856s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-154367 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [6547f6fb-62ae-4af6-a212-efea4e7dfcec] Pending
helpers_test.go:344: "sp-pod" [6547f6fb-62ae-4af6-a212-efea4e7dfcec] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [6547f6fb-62ae-4af6-a212-efea4e7dfcec] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004150549s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-154367 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.28s)
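
The core assertion here: a file written under the PVC mount survives deleting and recreating the pod. A sketch of that sequence using kubectl (context, pod, and manifest names taken from the log; assumes kubectl on PATH and is illustration, not the test harness):

package main

import (
	"fmt"
	"os/exec"
)

// kubectl runs a kubectl command against the test context and returns combined output.
func kubectl(args ...string) ([]byte, error) {
	full := append([]string{"--context", "functional-154367"}, args...)
	return exec.Command("kubectl", full...).CombinedOutput()
}

func main() {
	_, _ = kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	_, _ = kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	_, _ = kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// Once the new pod is Running, foo should still be on the volume.
	out, err := kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")
	fmt.Printf("ls /tmp/mount -> %s (err=%v)\n", out, err)
}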

TestFunctional/parallel/SSHCmd (0.72s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.72s)

TestFunctional/parallel/CpCmd (2.37s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 ssh -n functional-154367 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 cp functional-154367:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2448585441/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 ssh -n functional-154367 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 ssh -n functional-154367 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.37s)
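
The three cp invocations above cover host-to-node, node-to-host, and host-to-a-nonexistent-node-path copies. The same flow standalone, with /tmp/out.txt as an illustrative destination:

    minikube -p functional-154367 cp testdata/cp-test.txt /home/docker/cp-test.txt              # host -> node
    minikube -p functional-154367 cp functional-154367:/home/docker/cp-test.txt /tmp/out.txt    # node -> host
    minikube -p functional-154367 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt       # parent dirs created on the node
    minikube -p functional-154367 ssh -n functional-154367 "sudo cat /home/docker/cp-test.txt"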

TestFunctional/parallel/FileSync (0.38s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/7584/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 ssh "sudo cat /etc/test/nested/copy/7584/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.38s)

TestFunctional/parallel/CertSync (2.21s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/7584.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 ssh "sudo cat /etc/ssl/certs/7584.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/7584.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 ssh "sudo cat /usr/share/ca-certificates/7584.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/75842.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 ssh "sudo cat /etc/ssl/certs/75842.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/75842.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 ssh "sudo cat /usr/share/ca-certificates/75842.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.21s)
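
The paths checked above follow minikube's cert-sync convention: a cert named after the test process PID (7584.pem, 75842.pem) appears under both /etc/ssl/certs and /usr/share/ca-certificates, plus an openssl subject-hash symlink (51391683.0, 3ec20f2e.0). A hedged sketch of triggering the same sync yourself, assuming my-ca.pem is a CA certificate of your own:

    cp my-ca.pem ~/.minikube/certs/                           # certs here are synced into the guest on start
    minikube -p functional-154367 start                       # re-running start performs the sync
    minikube -p functional-154367 ssh "sudo cat /etc/ssl/certs/my-ca.pem"
    openssl x509 -hash -noout -in my-ca.pem                   # prints the hash used for the <hash>.0 symlink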

TestFunctional/parallel/NodeLabels (0.11s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-154367 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)
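
The go-template above iterates the label map of the first node and prints only the keys. Run standalone:

    kubectl get nodes --output=go-template \
      --template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
    # typical keys: kubernetes.io/arch kubernetes.io/hostname kubernetes.io/os minikube.k8s.io/name ...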

TestFunctional/parallel/NonActiveRuntimeDisabled (0.37s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-154367 ssh "sudo systemctl is-active crio": exit status 1 (373.607618ms)

-- stdout --
	inactive
-- /stdout --
** stderr **
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.37s)
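
The non-zero exit here is the expected result: this run uses the docker runtime, so crio must be inactive. systemctl is-active exits 0 only for an active unit (3 for inactive), and minikube ssh surfaces the remote failure as its own exit status 1. To check by hand:

    minikube -p functional-154367 ssh "sudo systemctl is-active crio"; echo "exit=$?"
    # expect stdout "inactive" and a non-zero exit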

TestFunctional/parallel/License (0.25s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.25s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.62s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-154367 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-154367 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-154367 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 46373: os: process already finished
helpers_test.go:502: unable to terminate pid 46176: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-154367 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.62s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-154367 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.54s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-154367 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [4930939d-7616-4754-a6c3-f441bb18604d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [4930939d-7616-4754-a6c3-f441bb18604d] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.004113948s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.54s)
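
The WaitService tests exercise the tunnel end to end: with minikube tunnel running, a Service of type LoadBalancer gets a real ingress IP that is reachable from the host. A minimal sketch (testsvc.yaml stands in for minikube's testdata manifest with an nginx pod plus LoadBalancer service):

    minikube -p functional-154367 tunnel &                    # must stay running; may prompt for sudo
    kubectl apply -f testsvc.yaml
    kubectl get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
    curl "http://$(kubectl get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"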

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-154367 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.10s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.108.63.46 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-154367 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.29s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-154367 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-154367 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-pjqzw" [06444c30-ce8f-4351-bbe8-2f2dfe1fea21] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-pjqzw" [06444c30-ce8f-4351-bbe8-2f2dfe1fea21] Running
E0828 17:10:23.161839    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/client.crt: no such file or directory" logger="UnhandledError"
E0828 17:10:23.168779    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/client.crt: no such file or directory" logger="UnhandledError"
E0828 17:10:23.180233    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/client.crt: no such file or directory" logger="UnhandledError"
E0828 17:10:23.201887    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/client.crt: no such file or directory" logger="UnhandledError"
E0828 17:10:23.243328    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/client.crt: no such file or directory" logger="UnhandledError"
E0828 17:10:23.325594    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/client.crt: no such file or directory" logger="UnhandledError"
E0828 17:10:23.487136    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/client.crt: no such file or directory" logger="UnhandledError"
E0828 17:10:23.808845    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/client.crt: no such file or directory" logger="UnhandledError"
E0828 17:10:24.450437    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/client.crt: no such file or directory" logger="UnhandledError"
E0828 17:10:25.731845    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.003806935s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.29s)
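
(The E0828 cert_rotation lines above appear to be client-side noise referencing the addons-161312 profile from the earlier test, not output of this test.) The deployment the ServiceCmd tests run against can be recreated with:

    kubectl create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
    kubectl expose deployment hello-node --type=NodePort --port=8080
    kubectl get pods -l app=hello-node --watch                # wait for Running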

TestFunctional/parallel/ProfileCmd/profile_not_create (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.40s)

TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "328.56504ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "81.011181ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "343.072392ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "72.090976ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

TestFunctional/parallel/ServiceCmd/List (0.65s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.65s)

TestFunctional/parallel/MountCmd/any-port (9.42s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-154367 /tmp/TestFunctionalparallelMountCmdany-port3623050566/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1724865027480473316" to /tmp/TestFunctionalparallelMountCmdany-port3623050566/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1724865027480473316" to /tmp/TestFunctionalparallelMountCmdany-port3623050566/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1724865027480473316" to /tmp/TestFunctionalparallelMountCmdany-port3623050566/001/test-1724865027480473316
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-154367 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (394.328968ms)

** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 ssh "findmnt -T /mount-9p | grep 9p"
E0828 17:10:28.295527    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 28 17:10 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 28 17:10 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 28 17:10 test-1724865027480473316
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 ssh cat /mount-9p/test-1724865027480473316
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-154367 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [ff2cd33f-21cd-4157-aa08-38b24c311594] Pending
helpers_test.go:344: "busybox-mount" [ff2cd33f-21cd-4157-aa08-38b24c311594] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [ff2cd33f-21cd-4157-aa08-38b24c311594] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [ff2cd33f-21cd-4157-aa08-38b24c311594] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.004147538s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-154367 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-154367 /tmp/TestFunctionalparallelMountCmdany-port3623050566/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.42s)
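
The any-port flow above is the basic 9p round trip: mount a host directory into the guest, verify it with findmnt, exercise it from a pod, then stop the mount process to unmount. A minimal sketch with an illustrative host path:

    minikube mount -p functional-154367 /tmp/hostdir:/mount-9p &   # keep running in the background
    minikube -p functional-154367 ssh "findmnt -T /mount-9p | grep 9p"
    minikube -p functional-154367 ssh "ls -la /mount-9p"
    kill %1                                                   # stopping the mount process removes the share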

TestFunctional/parallel/ServiceCmd/JSONOutput (0.53s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 service list -o json
functional_test.go:1494: Took "529.388596ms" to run "out/minikube-linux-arm64 -p functional-154367 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.53s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.47s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:31598
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.47s)

TestFunctional/parallel/ServiceCmd/Format (0.55s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.55s)

TestFunctional/parallel/ServiceCmd/URL (0.48s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:31598
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.48s)
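
Taken together, the ServiceCmd tests show the main ways to resolve a service endpoint; all of these ran above against the same hello-node NodePort service:

    minikube -p functional-154367 service list -o json
    minikube -p functional-154367 service --namespace=default --https --url hello-node
    minikube -p functional-154367 service hello-node --url --format={{.IP}}
    minikube -p functional-154367 service hello-node --url   # http://192.168.49.2:31598 in this run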

TestFunctional/parallel/MountCmd/specific-port (2.01s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-154367 /tmp/TestFunctionalparallelMountCmdspecific-port4231729074/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-154367 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (474.252375ms)

** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-154367 /tmp/TestFunctionalparallelMountCmdspecific-port4231729074/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-154367 ssh "sudo umount -f /mount-9p": exit status 1 (298.96377ms)

-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr **
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-154367 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-154367 /tmp/TestFunctionalparallelMountCmdspecific-port4231729074/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.01s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.57s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-154367 /tmp/TestFunctionalparallelMountCmdVerifyCleanup837785975/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-154367 /tmp/TestFunctionalparallelMountCmdVerifyCleanup837785975/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-154367 /tmp/TestFunctionalparallelMountCmdVerifyCleanup837785975/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-154367 ssh "findmnt -T" /mount1: exit status 1 (1.077418942s)

** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-154367 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-154367 /tmp/TestFunctionalparallelMountCmdVerifyCleanup837785975/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-154367 /tmp/TestFunctionalparallelMountCmdVerifyCleanup837785975/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-154367 /tmp/TestFunctionalparallelMountCmdVerifyCleanup837785975/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.57s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (1.01s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-154367 version -o=json --components: (1.014618578s)
--- PASS: TestFunctional/parallel/Version/components (1.01s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-154367 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-154367
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-154367
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-154367 image ls --format short --alsologtostderr:
I0828 17:10:50.245300   52144 out.go:345] Setting OutFile to fd 1 ...
I0828 17:10:50.245471   52144 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0828 17:10:50.245479   52144 out.go:358] Setting ErrFile to fd 2...
I0828 17:10:50.245491   52144 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0828 17:10:50.245752   52144 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-2268/.minikube/bin
I0828 17:10:50.246421   52144 config.go:182] Loaded profile config "functional-154367": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0828 17:10:50.246551   52144 config.go:182] Loaded profile config "functional-154367": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0828 17:10:50.247044   52144 cli_runner.go:164] Run: docker container inspect functional-154367 --format={{.State.Status}}
I0828 17:10:50.266230   52144 ssh_runner.go:195] Run: systemctl --version
I0828 17:10:50.266290   52144 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-154367
I0828 17:10:50.310297   52144 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19529-2268/.minikube/machines/functional-154367/id_rsa Username:docker}
I0828 17:10:50.412878   52144 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)
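
The ImageList tests here and below only vary the output format of the same command:

    minikube -p functional-154367 image ls --format short
    minikube -p functional-154367 image ls --format table
    minikube -p functional-154367 image ls --format json
    minikube -p functional-154367 image ls --format yaml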

TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-154367 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/pause                       | 3.10              | afb61768ce381 | 514kB  |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| docker.io/library/minikube-local-cache-test | functional-154367 | 883db8d832d27 | 30B    |
| registry.k8s.io/kube-controller-manager     | v1.31.0           | fcb0683e6bdbd | 85.9MB |
| registry.k8s.io/kube-proxy                  | v1.31.0           | 71d55d66fd4ee | 94.7MB |
| docker.io/kicbase/echo-server               | functional-154367 | ce2d2cda2d858 | 4.78MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/kube-apiserver              | v1.31.0           | cd0f0ae0ec9e0 | 91.5MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/etcd                        | 3.5.15-0          | 27e3830e14027 | 139MB  |
| docker.io/library/nginx                     | latest            | a9dfdba8b7190 | 193MB  |
| registry.k8s.io/kube-scheduler              | v1.31.0           | fbbbd428abb4d | 66MB   |
| registry.k8s.io/coredns/coredns             | v1.11.1           | 2437cf7621777 | 57.4MB |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| docker.io/library/nginx                     | alpine            | 70594c812316a | 47MB   |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-154367 image ls --format table --alsologtostderr:
I0828 17:10:51.132176   52383 out.go:345] Setting OutFile to fd 1 ...
I0828 17:10:51.132450   52383 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0828 17:10:51.132477   52383 out.go:358] Setting ErrFile to fd 2...
I0828 17:10:51.132606   52383 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0828 17:10:51.133038   52383 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-2268/.minikube/bin
I0828 17:10:51.133993   52383 config.go:182] Loaded profile config "functional-154367": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0828 17:10:51.134233   52383 config.go:182] Loaded profile config "functional-154367": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0828 17:10:51.135080   52383 cli_runner.go:164] Run: docker container inspect functional-154367 --format={{.State.Status}}
I0828 17:10:51.166866   52383 ssh_runner.go:195] Run: systemctl --version
I0828 17:10:51.166928   52383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-154367
I0828 17:10:51.193115   52383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19529-2268/.minikube/machines/functional-154367/id_rsa Username:docker}
I0828 17:10:51.288147   52383 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-154367 image ls --format json --alsologtostderr:
[{"id":"cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"91500000"},{"id":"fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"85900000"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"70594c812316a9bc20dd5d679982c6322dc7cf0128687ae9f8
49d0207783e753","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"47000000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"883db8d832d27047c81e130f92c6792d4fc15e81cf98db0ccc5e9007444fedab","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-154367"],"size":"30"},{"id":"a9dfdba8b719078c5705fdecd6f8315765cc79e473111aa9451551ddc340b2bc","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb","repoDigests":
[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"66000000"},{"id":"71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"94700000"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139000000"},{"id":"2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"57400000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-154367"],"size":"4780000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"
size":"240000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-154367 image ls --format json --alsologtostderr:
I0828 17:10:50.844898   52324 out.go:345] Setting OutFile to fd 1 ...
I0828 17:10:50.845132   52324 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0828 17:10:50.845163   52324 out.go:358] Setting ErrFile to fd 2...
I0828 17:10:50.845185   52324 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0828 17:10:50.845467   52324 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-2268/.minikube/bin
I0828 17:10:50.846126   52324 config.go:182] Loaded profile config "functional-154367": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0828 17:10:50.846310   52324 config.go:182] Loaded profile config "functional-154367": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0828 17:10:50.846874   52324 cli_runner.go:164] Run: docker container inspect functional-154367 --format={{.State.Status}}
I0828 17:10:50.868776   52324 ssh_runner.go:195] Run: systemctl --version
I0828 17:10:50.868835   52324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-154367
I0828 17:10:50.885861   52324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19529-2268/.minikube/machines/functional-154367/id_rsa Username:docker}
I0828 17:10:50.995976   52324 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-154367 image ls --format yaml --alsologtostderr:
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "91500000"
- id: fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "66000000"
- id: 2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "57400000"
- id: a9dfdba8b719078c5705fdecd6f8315765cc79e473111aa9451551ddc340b2bc
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "85900000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "514000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-154367
size: "4780000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 70594c812316a9bc20dd5d679982c6322dc7cf0128687ae9f849d0207783e753
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "47000000"
- id: 71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "94700000"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139000000"
- id: 883db8d832d27047c81e130f92c6792d4fc15e81cf98db0ccc5e9007444fedab
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-154367
size: "30"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-154367 image ls --format yaml --alsologtostderr:
I0828 17:10:50.597736   52240 out.go:345] Setting OutFile to fd 1 ...
I0828 17:10:50.597866   52240 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0828 17:10:50.597876   52240 out.go:358] Setting ErrFile to fd 2...
I0828 17:10:50.597882   52240 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0828 17:10:50.598151   52240 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-2268/.minikube/bin
I0828 17:10:50.598802   52240 config.go:182] Loaded profile config "functional-154367": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0828 17:10:50.598940   52240 config.go:182] Loaded profile config "functional-154367": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0828 17:10:50.599534   52240 cli_runner.go:164] Run: docker container inspect functional-154367 --format={{.State.Status}}
I0828 17:10:50.618103   52240 ssh_runner.go:195] Run: systemctl --version
I0828 17:10:50.618265   52240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-154367
I0828 17:10:50.644167   52240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19529-2268/.minikube/machines/functional-154367/id_rsa Username:docker}
I0828 17:10:50.735934   52240 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-154367 ssh pgrep buildkitd: exit status 1 (335.360965ms)

** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 image build -t localhost/my-image:functional-154367 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-154367 image build -t localhost/my-image:functional-154367 testdata/build --alsologtostderr: (2.666934769s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-154367 image build -t localhost/my-image:functional-154367 testdata/build --alsologtostderr:
I0828 17:10:50.861200   52329 out.go:345] Setting OutFile to fd 1 ...
I0828 17:10:50.862595   52329 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0828 17:10:50.862613   52329 out.go:358] Setting ErrFile to fd 2...
I0828 17:10:50.862626   52329 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0828 17:10:50.862980   52329 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-2268/.minikube/bin
I0828 17:10:50.864326   52329 config.go:182] Loaded profile config "functional-154367": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0828 17:10:50.867520   52329 config.go:182] Loaded profile config "functional-154367": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0828 17:10:50.868135   52329 cli_runner.go:164] Run: docker container inspect functional-154367 --format={{.State.Status}}
I0828 17:10:50.891797   52329 ssh_runner.go:195] Run: systemctl --version
I0828 17:10:50.891861   52329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-154367
I0828 17:10:50.920198   52329 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19529-2268/.minikube/machines/functional-154367/id_rsa Username:docker}
I0828 17:10:51.023969   52329 build_images.go:161] Building image from path: /tmp/build.3891889765.tar
I0828 17:10:51.024045   52329 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0828 17:10:51.065815   52329 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3891889765.tar
I0828 17:10:51.070003   52329 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3891889765.tar: stat -c "%s %y" /var/lib/minikube/build/build.3891889765.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3891889765.tar': No such file or directory
I0828 17:10:51.070042   52329 ssh_runner.go:362] scp /tmp/build.3891889765.tar --> /var/lib/minikube/build/build.3891889765.tar (3072 bytes)
I0828 17:10:51.101881   52329 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3891889765
I0828 17:10:51.112856   52329 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3891889765 -xf /var/lib/minikube/build/build.3891889765.tar
I0828 17:10:51.125364   52329 docker.go:360] Building image: /var/lib/minikube/build/build.3891889765
I0828 17:10:51.125450   52329 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-154367 /var/lib/minikube/build/build.3891889765
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.2s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.5s

#6 [2/3] RUN true
#6 DONE 0.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:700bbd80e876b115b272b2a9892848fe25ae5d49af4621528d612f50ffa20031 done
#8 naming to localhost/my-image:functional-154367 done
#8 DONE 0.0s
I0828 17:10:53.424978   52329 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-154367 /var/lib/minikube/build/build.3891889765: (2.299502577s)
I0828 17:10:53.425065   52329 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3891889765
I0828 17:10:53.434636   52329 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3891889765.tar
I0828 17:10:53.443789   52329 build_images.go:217] Built localhost/my-image:functional-154367 from /tmp/build.3891889765.tar
I0828 17:10:53.443821   52329 build_images.go:133] succeeded building to: functional-154367
I0828 17:10:53.443834   52329 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.21s)
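The log above traces what `minikube image build` actually does: tar the local build context, scp the tarball into the node, untar it under /var/lib/minikube/build, and run `docker build` against it inside the node. A minimal sketch of the same round trip, assuming the profile name from this log (the /tmp and /home/docker paths are illustrative):

    # Package a local build context, copy it into the node, and build there,
    # mirroring the scp + untar + docker build steps logged above.
    tar -cf /tmp/ctx.tar Dockerfile content.txt
    minikube -p functional-154367 cp /tmp/ctx.tar /home/docker/ctx.tar
    minikube -p functional-154367 ssh "mkdir -p /home/docker/ctx && tar -C /home/docker/ctx -xf /home/docker/ctx.tar"
    minikube -p functional-154367 ssh "docker build -t localhost/my-image:functional-154367 /home/docker/ctx"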

TestFunctional/parallel/ImageCommands/Setup (0.8s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-154367
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.80s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 image load --daemon kicbase/echo-server:functional-154367 --alsologtostderr
E0828 17:10:43.658825    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-154367 image load --daemon kicbase/echo-server:functional-154367 --alsologtostderr: (1.114399669s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.37s)
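`image load --daemon` copies an image out of the host's Docker daemon into the runtime inside the node, and `image ls` is how the test confirms it arrived. A short sketch of the same cycle, assuming the tag created by the Setup test above:

    # Load a host-side image into the node's Docker daemon, then verify it is there.
    minikube -p functional-154367 image load --daemon kicbase/echo-server:functional-154367
    minikube -p functional-154367 image ls | grep echo-server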

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.87s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 image load --daemon kicbase/echo-server:functional-154367 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.87s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.05s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-154367
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 image load --daemon kicbase/echo-server:functional-154367 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.05s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 image save kicbase/echo-server:functional-154367 /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
2024/08/28 17:10:46 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.36s)
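ImageSaveToFile and the ImageLoadFromFile test further down exercise the tarball round trip: `image save` exports an image from the cluster to a file on the host, and `image load <file>` pushes it back in. A condensed sketch (the /tmp path is illustrative):

    # Export an image from the cluster to a tarball, then re-import it.
    minikube -p functional-154367 image save kicbase/echo-server:functional-154367 /tmp/echo-server.tar
    minikube -p functional-154367 image load /tmp/echo-server.tar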

TestFunctional/parallel/UpdateContextCmd/no_changes (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.20s)
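`update-context` rewrites the profile's entry in kubeconfig so the server address matches the currently published API server endpoint; the three UpdateContextCmd variants only differ in what state the kubeconfig starts from. A hedged way to observe the effect:

    # Rewrite the kubeconfig entry for the profile, then inspect the server URL it points at.
    minikube -p functional-154367 update-context
    kubectl config view -o jsonpath='{.clusters[?(@.name=="functional-154367")].cluster.server}'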

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)

TestFunctional/parallel/DockerEnv/bash (1.36s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-154367 docker-env) && out/minikube-linux-arm64 status -p functional-154367"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-154367 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.36s)
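`docker-env` prints shell exports (DOCKER_HOST and friends) that point the host's docker CLI at the daemon inside the node, which is why the plain `docker images` in the test lists the node's images rather than the host's. Sketch:

    # Point the host docker CLI at the daemon inside the minikube node.
    eval $(minikube -p functional-154367 docker-env)
    docker images    # now lists images inside the node, not on the host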

TestFunctional/parallel/ImageCommands/ImageRemove (0.58s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 image rm kicbase/echo-server:functional-154367 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.58s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.71s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.71s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-154367
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-154367 image save --daemon kicbase/echo-server:functional-154367 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-154367
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.48s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-154367
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-154367
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-154367
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (126.27s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-463241 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0828 17:11:04.140962    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/client.crt: no such file or directory" logger="UnhandledError"
E0828 17:11:45.102749    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-463241 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (2m5.346383056s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-463241 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (126.27s)
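The --ha flag provisions three control-plane nodes behind a single virtual API endpoint (192.168.49.254:8443 in the status output below). The same invocation in brief, with a hypothetical profile name:

    # Start a highly available (multi-control-plane) cluster and check every node.
    minikube start -p ha-demo --ha --memory=2200 --wait=true --driver=docker --container-runtime=docker
    minikube -p ha-demo status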

TestMultiControlPlane/serial/DeployApp (45.61s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-463241 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-463241 -- rollout status deployment/busybox
E0828 17:13:07.024663    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-463241 -- rollout status deployment/busybox: (5.156909034s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-463241 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-463241 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-463241 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-463241 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-463241 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-463241 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-463241 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-463241 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-463241 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-463241 -- exec busybox-7dff88458-kkl4j -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-463241 -- exec busybox-7dff88458-px5k9 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-463241 -- exec busybox-7dff88458-x72wf -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-463241 -- exec busybox-7dff88458-kkl4j -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-463241 -- exec busybox-7dff88458-px5k9 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-463241 -- exec busybox-7dff88458-x72wf -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-463241 -- exec busybox-7dff88458-kkl4j -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-463241 -- exec busybox-7dff88458-px5k9 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-463241 -- exec busybox-7dff88458-x72wf -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (45.61s)
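The repeated "expected 3 Pod IPs but got 2 (may be temporary)" lines are the test's own retry loop: it re-runs the same jsonpath query until every busybox replica has been assigned a pod IP before moving on to the DNS checks. The equivalent poll in plain kubectl:

    # Wait for the rollout, then retry until all three replicas report a pod IP.
    kubectl --context ha-463241 rollout status deployment/busybox
    until [ "$(kubectl --context ha-463241 get pods -o jsonpath='{.items[*].status.podIP}' | wc -w)" -eq 3 ]; do
      sleep 2
    done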

TestMultiControlPlane/serial/PingHostFromPods (1.67s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-463241 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-463241 -- exec busybox-7dff88458-kkl4j -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-463241 -- exec busybox-7dff88458-kkl4j -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-463241 -- exec busybox-7dff88458-px5k9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-463241 -- exec busybox-7dff88458-px5k9 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-463241 -- exec busybox-7dff88458-x72wf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-463241 -- exec busybox-7dff88458-x72wf -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.67s)

TestMultiControlPlane/serial/AddWorkerNode (26.66s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-463241 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-463241 -v=7 --alsologtostderr: (25.655631413s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-463241 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-463241 status -v=7 --alsologtostderr: (1.001113181s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (26.66s)
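`node add` joins a new machine to an existing profile: with no flags it joins as a worker (as here), while --control-plane (used by the AddSecondaryNode test below) joins it as an additional control-plane member. In brief:

    # Add a worker node, then an extra control-plane node, to an existing profile.
    minikube node add -p ha-463241
    minikube node add -p ha-463241 --control-plane
    minikube -p ha-463241 status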

TestMultiControlPlane/serial/NodeLabels (0.1s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-463241 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.10s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.8s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.80s)

TestMultiControlPlane/serial/CopyFile (19.7s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-463241 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-linux-arm64 -p ha-463241 status --output json -v=7 --alsologtostderr: (1.02393665s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-463241 cp testdata/cp-test.txt ha-463241:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-463241 ssh -n ha-463241 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-463241 cp ha-463241:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile874070070/001/cp-test_ha-463241.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-463241 ssh -n ha-463241 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-463241 cp ha-463241:/home/docker/cp-test.txt ha-463241-m02:/home/docker/cp-test_ha-463241_ha-463241-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-463241 ssh -n ha-463241 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-463241 ssh -n ha-463241-m02 "sudo cat /home/docker/cp-test_ha-463241_ha-463241-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-463241 cp ha-463241:/home/docker/cp-test.txt ha-463241-m03:/home/docker/cp-test_ha-463241_ha-463241-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-463241 ssh -n ha-463241 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-463241 ssh -n ha-463241-m03 "sudo cat /home/docker/cp-test_ha-463241_ha-463241-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-463241 cp ha-463241:/home/docker/cp-test.txt ha-463241-m04:/home/docker/cp-test_ha-463241_ha-463241-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-463241 ssh -n ha-463241 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-463241 ssh -n ha-463241-m04 "sudo cat /home/docker/cp-test_ha-463241_ha-463241-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-463241 cp testdata/cp-test.txt ha-463241-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-463241 ssh -n ha-463241-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-463241 cp ha-463241-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile874070070/001/cp-test_ha-463241-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-463241 ssh -n ha-463241-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-463241 cp ha-463241-m02:/home/docker/cp-test.txt ha-463241:/home/docker/cp-test_ha-463241-m02_ha-463241.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-463241 ssh -n ha-463241-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-463241 ssh -n ha-463241 "sudo cat /home/docker/cp-test_ha-463241-m02_ha-463241.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-463241 cp ha-463241-m02:/home/docker/cp-test.txt ha-463241-m03:/home/docker/cp-test_ha-463241-m02_ha-463241-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-463241 ssh -n ha-463241-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-463241 ssh -n ha-463241-m03 "sudo cat /home/docker/cp-test_ha-463241-m02_ha-463241-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-463241 cp ha-463241-m02:/home/docker/cp-test.txt ha-463241-m04:/home/docker/cp-test_ha-463241-m02_ha-463241-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-463241 ssh -n ha-463241-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-463241 ssh -n ha-463241-m04 "sudo cat /home/docker/cp-test_ha-463241-m02_ha-463241-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-463241 cp testdata/cp-test.txt ha-463241-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-463241 ssh -n ha-463241-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-463241 cp ha-463241-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile874070070/001/cp-test_ha-463241-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-463241 ssh -n ha-463241-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-463241 cp ha-463241-m03:/home/docker/cp-test.txt ha-463241:/home/docker/cp-test_ha-463241-m03_ha-463241.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-463241 ssh -n ha-463241-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-463241 ssh -n ha-463241 "sudo cat /home/docker/cp-test_ha-463241-m03_ha-463241.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-463241 cp ha-463241-m03:/home/docker/cp-test.txt ha-463241-m02:/home/docker/cp-test_ha-463241-m03_ha-463241-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-463241 ssh -n ha-463241-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-463241 ssh -n ha-463241-m02 "sudo cat /home/docker/cp-test_ha-463241-m03_ha-463241-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-463241 cp ha-463241-m03:/home/docker/cp-test.txt ha-463241-m04:/home/docker/cp-test_ha-463241-m03_ha-463241-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-463241 ssh -n ha-463241-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-463241 ssh -n ha-463241-m04 "sudo cat /home/docker/cp-test_ha-463241-m03_ha-463241-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-463241 cp testdata/cp-test.txt ha-463241-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-463241 ssh -n ha-463241-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-463241 cp ha-463241-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile874070070/001/cp-test_ha-463241-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-463241 ssh -n ha-463241-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-463241 cp ha-463241-m04:/home/docker/cp-test.txt ha-463241:/home/docker/cp-test_ha-463241-m04_ha-463241.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-463241 ssh -n ha-463241-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-463241 ssh -n ha-463241 "sudo cat /home/docker/cp-test_ha-463241-m04_ha-463241.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-463241 cp ha-463241-m04:/home/docker/cp-test.txt ha-463241-m02:/home/docker/cp-test_ha-463241-m04_ha-463241-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-463241 ssh -n ha-463241-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-463241 ssh -n ha-463241-m02 "sudo cat /home/docker/cp-test_ha-463241-m04_ha-463241-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-463241 cp ha-463241-m04:/home/docker/cp-test.txt ha-463241-m03:/home/docker/cp-test_ha-463241-m04_ha-463241-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-463241 ssh -n ha-463241-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-463241 ssh -n ha-463241-m03 "sudo cat /home/docker/cp-test_ha-463241-m04_ha-463241-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.70s)
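The CopyFile matrix above exercises every direction `minikube cp` supports, using `ssh -n <node>` to verify each copy. The three forms in brief (paths as used by the test):

    # host -> node, node -> host, and node -> node copies.
    minikube -p ha-463241 cp testdata/cp-test.txt ha-463241:/home/docker/cp-test.txt
    minikube -p ha-463241 cp ha-463241:/home/docker/cp-test.txt /tmp/cp-test.txt
    minikube -p ha-463241 cp ha-463241:/home/docker/cp-test.txt ha-463241-m02:/home/docker/cp-test.txt
    minikube -p ha-463241 ssh -n ha-463241-m02 "sudo cat /home/docker/cp-test.txt"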

TestMultiControlPlane/serial/StopSecondaryNode (11.81s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-463241 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-463241 node stop m02 -v=7 --alsologtostderr: (11.018632941s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-463241 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-463241 status -v=7 --alsologtostderr: exit status 7 (794.44231ms)

-- stdout --
	ha-463241
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-463241-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-463241-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-463241-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0828 17:14:48.173420   75139 out.go:345] Setting OutFile to fd 1 ...
	I0828 17:14:48.173619   75139 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 17:14:48.173648   75139 out.go:358] Setting ErrFile to fd 2...
	I0828 17:14:48.173669   75139 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 17:14:48.173951   75139 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-2268/.minikube/bin
	I0828 17:14:48.174172   75139 out.go:352] Setting JSON to false
	I0828 17:14:48.174244   75139 mustload.go:65] Loading cluster: ha-463241
	I0828 17:14:48.174324   75139 notify.go:220] Checking for updates...
	I0828 17:14:48.174770   75139 config.go:182] Loaded profile config "ha-463241": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 17:14:48.175154   75139 status.go:255] checking status of ha-463241 ...
	I0828 17:14:48.175978   75139 cli_runner.go:164] Run: docker container inspect ha-463241 --format={{.State.Status}}
	I0828 17:14:48.195531   75139 status.go:330] ha-463241 host status = "Running" (err=<nil>)
	I0828 17:14:48.195559   75139 host.go:66] Checking if "ha-463241" exists ...
	I0828 17:14:48.195937   75139 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-463241
	I0828 17:14:48.222727   75139 host.go:66] Checking if "ha-463241" exists ...
	I0828 17:14:48.223041   75139 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0828 17:14:48.223091   75139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-463241
	I0828 17:14:48.247472   75139 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19529-2268/.minikube/machines/ha-463241/id_rsa Username:docker}
	I0828 17:14:48.345185   75139 ssh_runner.go:195] Run: systemctl --version
	I0828 17:14:48.351236   75139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 17:14:48.369456   75139 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0828 17:14:48.445710   75139 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:71 SystemTime:2024-08-28 17:14:48.435648801 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0828 17:14:48.446327   75139 kubeconfig.go:125] found "ha-463241" server: "https://192.168.49.254:8443"
	I0828 17:14:48.446353   75139 api_server.go:166] Checking apiserver status ...
	I0828 17:14:48.446395   75139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 17:14:48.458937   75139 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2247/cgroup
	I0828 17:14:48.469123   75139 api_server.go:182] apiserver freezer: "8:freezer:/docker/948c48e42f0acae4915c28a442c90b2881152fd117917caf93d8463d1b0a4507/kubepods/burstable/pod87163e9b5462267d99ec89fcc03ffd4a/537cc674157b78161d9cf876390f5e9ad1b1037d9a1ea05ff6016f33f61145a8"
	I0828 17:14:48.469200   75139 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/948c48e42f0acae4915c28a442c90b2881152fd117917caf93d8463d1b0a4507/kubepods/burstable/pod87163e9b5462267d99ec89fcc03ffd4a/537cc674157b78161d9cf876390f5e9ad1b1037d9a1ea05ff6016f33f61145a8/freezer.state
	I0828 17:14:48.479559   75139 api_server.go:204] freezer state: "THAWED"
	I0828 17:14:48.479627   75139 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0828 17:14:48.487584   75139 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0828 17:14:48.487657   75139 status.go:422] ha-463241 apiserver status = Running (err=<nil>)
	I0828 17:14:48.487669   75139 status.go:257] ha-463241 status: &{Name:ha-463241 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0828 17:14:48.487686   75139 status.go:255] checking status of ha-463241-m02 ...
	I0828 17:14:48.487996   75139 cli_runner.go:164] Run: docker container inspect ha-463241-m02 --format={{.State.Status}}
	I0828 17:14:48.508403   75139 status.go:330] ha-463241-m02 host status = "Stopped" (err=<nil>)
	I0828 17:14:48.508428   75139 status.go:343] host is not running, skipping remaining checks
	I0828 17:14:48.508435   75139 status.go:257] ha-463241-m02 status: &{Name:ha-463241-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0828 17:14:48.508454   75139 status.go:255] checking status of ha-463241-m03 ...
	I0828 17:14:48.508760   75139 cli_runner.go:164] Run: docker container inspect ha-463241-m03 --format={{.State.Status}}
	I0828 17:14:48.526808   75139 status.go:330] ha-463241-m03 host status = "Running" (err=<nil>)
	I0828 17:14:48.526835   75139 host.go:66] Checking if "ha-463241-m03" exists ...
	I0828 17:14:48.527150   75139 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-463241-m03
	I0828 17:14:48.545537   75139 host.go:66] Checking if "ha-463241-m03" exists ...
	I0828 17:14:48.545842   75139 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0828 17:14:48.546179   75139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-463241-m03
	I0828 17:14:48.562936   75139 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/19529-2268/.minikube/machines/ha-463241-m03/id_rsa Username:docker}
	I0828 17:14:48.661986   75139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 17:14:48.689680   75139 kubeconfig.go:125] found "ha-463241" server: "https://192.168.49.254:8443"
	I0828 17:14:48.689717   75139 api_server.go:166] Checking apiserver status ...
	I0828 17:14:48.689761   75139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 17:14:48.703328   75139 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2189/cgroup
	I0828 17:14:48.713429   75139 api_server.go:182] apiserver freezer: "8:freezer:/docker/70044a93b79e6c695ee8922f63fa4308f6f7e4c9bf5e8b476fdf42f1c55bcaea/kubepods/burstable/podf6f0942f2918fe37131e1ff6be221265/e7be43936fa324356ae7464adaab8fd8279937649d913173afafa0849e647150"
	I0828 17:14:48.713530   75139 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/70044a93b79e6c695ee8922f63fa4308f6f7e4c9bf5e8b476fdf42f1c55bcaea/kubepods/burstable/podf6f0942f2918fe37131e1ff6be221265/e7be43936fa324356ae7464adaab8fd8279937649d913173afafa0849e647150/freezer.state
	I0828 17:14:48.723258   75139 api_server.go:204] freezer state: "THAWED"
	I0828 17:14:48.723309   75139 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0828 17:14:48.731546   75139 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0828 17:14:48.731576   75139 status.go:422] ha-463241-m03 apiserver status = Running (err=<nil>)
	I0828 17:14:48.731586   75139 status.go:257] ha-463241-m03 status: &{Name:ha-463241-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0828 17:14:48.731609   75139 status.go:255] checking status of ha-463241-m04 ...
	I0828 17:14:48.731925   75139 cli_runner.go:164] Run: docker container inspect ha-463241-m04 --format={{.State.Status}}
	I0828 17:14:48.748257   75139 status.go:330] ha-463241-m04 host status = "Running" (err=<nil>)
	I0828 17:14:48.748280   75139 host.go:66] Checking if "ha-463241-m04" exists ...
	I0828 17:14:48.748586   75139 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-463241-m04
	I0828 17:14:48.767028   75139 host.go:66] Checking if "ha-463241-m04" exists ...
	I0828 17:14:48.767555   75139 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0828 17:14:48.767609   75139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-463241-m04
	I0828 17:14:48.797690   75139 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/19529-2268/.minikube/machines/ha-463241-m04/id_rsa Username:docker}
	I0828 17:14:48.896358   75139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 17:14:48.908028   75139 status.go:257] ha-463241-m04 status: &{Name:ha-463241-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.81s)
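Note the exit code: with one control-plane node stopped, `status` exits with code 7 rather than 0 (the Non-zero exit line above), so scripts can detect a degraded cluster without parsing the output. A sketch:

    # A non-zero status exit code signals a stopped or unhealthy node.
    minikube -p ha-463241 node stop m02
    minikube -p ha-463241 status
    rc=$?    # 7 in the run above, with m02 stopped
    [ "$rc" -ne 0 ] && echo "cluster degraded (status exited $rc)"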

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.6s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.60s)

TestMultiControlPlane/serial/RestartSecondaryNode (66.55s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-463241 node start m02 -v=7 --alsologtostderr
E0828 17:14:59.400726    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/functional-154367/client.crt: no such file or directory" logger="UnhandledError"
E0828 17:14:59.407161    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/functional-154367/client.crt: no such file or directory" logger="UnhandledError"
E0828 17:14:59.418516    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/functional-154367/client.crt: no such file or directory" logger="UnhandledError"
E0828 17:14:59.439915    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/functional-154367/client.crt: no such file or directory" logger="UnhandledError"
E0828 17:14:59.481288    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/functional-154367/client.crt: no such file or directory" logger="UnhandledError"
E0828 17:14:59.562772    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/functional-154367/client.crt: no such file or directory" logger="UnhandledError"
E0828 17:14:59.724221    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/functional-154367/client.crt: no such file or directory" logger="UnhandledError"
E0828 17:15:00.048131    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/functional-154367/client.crt: no such file or directory" logger="UnhandledError"
E0828 17:15:00.692741    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/functional-154367/client.crt: no such file or directory" logger="UnhandledError"
E0828 17:15:01.974174    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/functional-154367/client.crt: no such file or directory" logger="UnhandledError"
E0828 17:15:04.536162    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/functional-154367/client.crt: no such file or directory" logger="UnhandledError"
E0828 17:15:09.657918    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/functional-154367/client.crt: no such file or directory" logger="UnhandledError"
E0828 17:15:19.899269    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/functional-154367/client.crt: no such file or directory" logger="UnhandledError"
E0828 17:15:23.161664    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/client.crt: no such file or directory" logger="UnhandledError"
E0828 17:15:40.381636    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/functional-154367/client.crt: no such file or directory" logger="UnhandledError"
E0828 17:15:50.867020    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-463241 node start m02 -v=7 --alsologtostderr: (1m5.397438244s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-463241 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-463241 status -v=7 --alsologtostderr: (1.030364667s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (66.55s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.76s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.76s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (181.07s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-463241 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-463241 -v=7 --alsologtostderr
E0828 17:16:21.343123    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/functional-154367/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-463241 -v=7 --alsologtostderr: (34.135877559s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-463241 --wait=true -v=7 --alsologtostderr
E0828 17:17:43.266598    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/functional-154367/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-463241 --wait=true -v=7 --alsologtostderr: (2m26.781068887s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-463241
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (181.07s)
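The stop/start cycle above shows that `minikube stop` preserves the profile's node list, so a later `start --wait=true` brings all four machines back and waits for them to become ready. The same cycle in brief:

    # Stop every node in the profile, then restart it and confirm the node list survived.
    minikube stop -p ha-463241
    minikube start -p ha-463241 --wait=true
    minikube node list -p ha-463241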

TestMultiControlPlane/serial/DeleteSecondaryNode (12.03s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-463241 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-463241 node delete m03 -v=7 --alsologtostderr: (11.039364882s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-463241 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.03s)
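The go-template in the last step reduces each node's condition list to its Ready status, so a healthy three-node cluster prints three "True" lines. The template on its own, without the test harness quoting:

    # Print one True/False line per node for the Ready condition.
    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'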

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.63s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.63s)

TestMultiControlPlane/serial/StopCluster (32.86s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-463241 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-463241 stop -v=7 --alsologtostderr: (32.756088017s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-463241 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-463241 status -v=7 --alsologtostderr: exit status 7 (105.370756ms)

-- stdout --
	ha-463241
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-463241-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-463241-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0828 17:19:43.337607  101845 out.go:345] Setting OutFile to fd 1 ...
	I0828 17:19:43.337814  101845 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 17:19:43.337840  101845 out.go:358] Setting ErrFile to fd 2...
	I0828 17:19:43.337858  101845 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 17:19:43.338131  101845 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-2268/.minikube/bin
	I0828 17:19:43.338344  101845 out.go:352] Setting JSON to false
	I0828 17:19:43.338415  101845 mustload.go:65] Loading cluster: ha-463241
	I0828 17:19:43.338495  101845 notify.go:220] Checking for updates...
	I0828 17:19:43.338920  101845 config.go:182] Loaded profile config "ha-463241": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 17:19:43.338952  101845 status.go:255] checking status of ha-463241 ...
	I0828 17:19:43.339500  101845 cli_runner.go:164] Run: docker container inspect ha-463241 --format={{.State.Status}}
	I0828 17:19:43.358449  101845 status.go:330] ha-463241 host status = "Stopped" (err=<nil>)
	I0828 17:19:43.358470  101845 status.go:343] host is not running, skipping remaining checks
	I0828 17:19:43.358478  101845 status.go:257] ha-463241 status: &{Name:ha-463241 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0828 17:19:43.358510  101845 status.go:255] checking status of ha-463241-m02 ...
	I0828 17:19:43.358829  101845 cli_runner.go:164] Run: docker container inspect ha-463241-m02 --format={{.State.Status}}
	I0828 17:19:43.382850  101845 status.go:330] ha-463241-m02 host status = "Stopped" (err=<nil>)
	I0828 17:19:43.382870  101845 status.go:343] host is not running, skipping remaining checks
	I0828 17:19:43.382877  101845 status.go:257] ha-463241-m02 status: &{Name:ha-463241-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0828 17:19:43.382896  101845 status.go:255] checking status of ha-463241-m04 ...
	I0828 17:19:43.383199  101845 cli_runner.go:164] Run: docker container inspect ha-463241-m04 --format={{.State.Status}}
	I0828 17:19:43.400302  101845 status.go:330] ha-463241-m04 host status = "Stopped" (err=<nil>)
	I0828 17:19:43.400322  101845 status.go:343] host is not running, skipping remaining checks
	I0828 17:19:43.400329  101845 status.go:257] ha-463241-m04 status: &{Name:ha-463241-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (32.86s)

TestMultiControlPlane/serial/RestartCluster (136.55s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-463241 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0828 17:19:59.400768    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/functional-154367/client.crt: no such file or directory" logger="UnhandledError"
E0828 17:20:23.161863    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/client.crt: no such file or directory" logger="UnhandledError"
E0828 17:20:27.107921    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/functional-154367/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-463241 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (2m15.497078924s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-463241 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (136.55s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.98s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.98s)

TestMultiControlPlane/serial/AddSecondaryNode (44.59s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-463241 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-463241 --control-plane -v=7 --alsologtostderr: (43.346236113s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-463241 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-463241 status -v=7 --alsologtostderr: (1.24812073s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (44.59s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.81s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.81s)

TestImageBuild/serial/Setup (35.44s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -p image-163111 --driver=docker  --container-runtime=docker
image_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -p image-163111 --driver=docker  --container-runtime=docker: (35.43941818s)
--- PASS: TestImageBuild/serial/Setup (35.44s)

TestImageBuild/serial/NormalBuild (1.87s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-163111
image_test.go:78: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-163111: (1.871031037s)
--- PASS: TestImageBuild/serial/NormalBuild (1.87s)

TestImageBuild/serial/BuildWithBuildArg (1.09s)
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-163111
image_test.go:99: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-163111: (1.087532457s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.09s)

TestImageBuild/serial/BuildWithDockerIgnore (0.93s)
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-163111
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.93s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.79s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-163111
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.79s)

TestJSONOutput/start/Command (72.91s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-263145 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-263145 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (1m12.910623988s)
--- PASS: TestJSONOutput/start/Command (72.91s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.62s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-263145 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.62s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.53s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-263145 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.53s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (10.9s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-263145 --output=json --user=testUser
E0828 17:24:59.401557    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/functional-154367/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-263145 --output=json --user=testUser: (10.898937767s)
--- PASS: TestJSONOutput/stop/Command (10.90s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-965513 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-965513 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (85.072219ms)

-- stdout --
	{"specversion":"1.0","id":"56ffde03-70d6-4edd-bb8e-f8d9b6a0218b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-965513] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"35bd1f42-9c67-459f-a6df-646c7da87e28","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19529"}}
	{"specversion":"1.0","id":"c1a49d19-83ed-481d-b1c6-e0ee1c25447e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c2dec25d-2761-4fb8-a645-f9a864d726fe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19529-2268/kubeconfig"}}
	{"specversion":"1.0","id":"b13fb0a2-49a6-41fa-b6f8-56c893555402","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-2268/.minikube"}}
	{"specversion":"1.0","id":"b64b3457-2821-45b7-b88f-0529c833fe9b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"a6645762-f5d4-4e6c-8a52-0cc193af27c3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9c9910bd-c11e-4055-83b2-d0d1309f7649","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-965513" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-965513
--- PASS: TestErrorJSONOutput (0.22s)

TestKicCustomNetwork/create_custom_network (34.96s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-448406 --network=
E0828 17:25:23.161361    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-448406 --network=: (32.696205459s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-448406" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-448406
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-448406: (2.242942963s)
--- PASS: TestKicCustomNetwork/create_custom_network (34.96s)

TestKicCustomNetwork/use_default_bridge_network (33.39s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-719079 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-719079 --network=bridge: (31.327874943s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-719079" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-719079
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-719079: (2.039872221s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (33.39s)

TestKicExistingNetwork (32.34s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-291803 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-291803 --network=existing-network: (30.245346148s)
helpers_test.go:175: Cleaning up "existing-network-291803" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-291803
E0828 17:26:46.228550    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-291803: (1.941823586s)
--- PASS: TestKicExistingNetwork (32.34s)

TestKicCustomSubnet (37.96s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-849645 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-849645 --subnet=192.168.60.0/24: (35.814584612s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-849645 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-849645" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-849645
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-849645: (2.118776465s)
--- PASS: TestKicCustomSubnet (37.96s)

TestKicStaticIP (33.13s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-723886 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-723886 --static-ip=192.168.200.200: (30.899044411s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-723886 ip
helpers_test.go:175: Cleaning up "static-ip-723886" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-723886
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-723886: (2.086759981s)
--- PASS: TestKicStaticIP (33.13s)

TestMainNoArgs (0.05s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (69.82s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-562040 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-562040 --driver=docker  --container-runtime=docker: (31.412266231s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-564682 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-564682 --driver=docker  --container-runtime=docker: (32.686733883s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-562040
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-564682
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-564682" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-564682
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-564682: (2.235911343s)
helpers_test.go:175: Cleaning up "first-562040" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-562040
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-562040: (2.184251291s)
--- PASS: TestMinikubeProfile (69.82s)

TestMountStart/serial/StartWithMountFirst (7.82s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-656450 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-656450 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (6.819512844s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.82s)

TestMountStart/serial/VerifyMountFirst (0.27s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-656450 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

TestMountStart/serial/StartWithMountSecond (8.26s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-668625 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-668625 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (7.262862649s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.26s)

TestMountStart/serial/VerifyMountSecond (0.26s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-668625 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.51s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-656450 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-656450 --alsologtostderr -v=5: (1.507614688s)
--- PASS: TestMountStart/serial/DeleteFirst (1.51s)

TestMountStart/serial/VerifyMountPostDelete (0.26s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-668625 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

TestMountStart/serial/Stop (1.19s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-668625
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-668625: (1.19087927s)
--- PASS: TestMountStart/serial/Stop (1.19s)

TestMountStart/serial/RestartStopped (8.39s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-668625
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-668625: (7.391647466s)
--- PASS: TestMountStart/serial/RestartStopped (8.39s)

TestMountStart/serial/VerifyMountPostStop (0.26s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-668625 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (86.92s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-144344 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0828 17:29:59.401482    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/functional-154367/client.crt: no such file or directory" logger="UnhandledError"
E0828 17:30:23.161819    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-144344 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m26.31327105s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-144344 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (86.92s)

TestMultiNode/serial/DeployApp2Nodes (49.58s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-144344 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-144344 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-144344 -- rollout status deployment/busybox: (4.915802589s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-144344 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-144344 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-144344 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-144344 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-144344 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-144344 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
E0828 17:31:22.469755    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/functional-154367/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-144344 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-144344 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-144344 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-144344 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-144344 -- exec busybox-7dff88458-sjgvk -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-144344 -- exec busybox-7dff88458-znntv -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-144344 -- exec busybox-7dff88458-sjgvk -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-144344 -- exec busybox-7dff88458-znntv -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-144344 -- exec busybox-7dff88458-sjgvk -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-144344 -- exec busybox-7dff88458-znntv -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (49.58s)

TestMultiNode/serial/PingHostFrom2Pods (1.02s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-144344 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-144344 -- exec busybox-7dff88458-sjgvk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-144344 -- exec busybox-7dff88458-sjgvk -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-144344 -- exec busybox-7dff88458-znntv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-144344 -- exec busybox-7dff88458-znntv -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.02s)

TestMultiNode/serial/AddNode (19.09s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-144344 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-144344 -v 3 --alsologtostderr: (18.293951431s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-144344 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (19.09s)

TestMultiNode/serial/MultiNodeLabels (0.11s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-144344 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.11s)

TestMultiNode/serial/ProfileList (0.38s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.38s)

TestMultiNode/serial/CopyFile (10.16s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-144344 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-144344 cp testdata/cp-test.txt multinode-144344:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-144344 ssh -n multinode-144344 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-144344 cp multinode-144344:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1737758586/001/cp-test_multinode-144344.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-144344 ssh -n multinode-144344 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-144344 cp multinode-144344:/home/docker/cp-test.txt multinode-144344-m02:/home/docker/cp-test_multinode-144344_multinode-144344-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-144344 ssh -n multinode-144344 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-144344 ssh -n multinode-144344-m02 "sudo cat /home/docker/cp-test_multinode-144344_multinode-144344-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-144344 cp multinode-144344:/home/docker/cp-test.txt multinode-144344-m03:/home/docker/cp-test_multinode-144344_multinode-144344-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-144344 ssh -n multinode-144344 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-144344 ssh -n multinode-144344-m03 "sudo cat /home/docker/cp-test_multinode-144344_multinode-144344-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-144344 cp testdata/cp-test.txt multinode-144344-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-144344 ssh -n multinode-144344-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-144344 cp multinode-144344-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1737758586/001/cp-test_multinode-144344-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-144344 ssh -n multinode-144344-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-144344 cp multinode-144344-m02:/home/docker/cp-test.txt multinode-144344:/home/docker/cp-test_multinode-144344-m02_multinode-144344.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-144344 ssh -n multinode-144344-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-144344 ssh -n multinode-144344 "sudo cat /home/docker/cp-test_multinode-144344-m02_multinode-144344.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-144344 cp multinode-144344-m02:/home/docker/cp-test.txt multinode-144344-m03:/home/docker/cp-test_multinode-144344-m02_multinode-144344-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-144344 ssh -n multinode-144344-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-144344 ssh -n multinode-144344-m03 "sudo cat /home/docker/cp-test_multinode-144344-m02_multinode-144344-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-144344 cp testdata/cp-test.txt multinode-144344-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-144344 ssh -n multinode-144344-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-144344 cp multinode-144344-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1737758586/001/cp-test_multinode-144344-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-144344 ssh -n multinode-144344-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-144344 cp multinode-144344-m03:/home/docker/cp-test.txt multinode-144344:/home/docker/cp-test_multinode-144344-m03_multinode-144344.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-144344 ssh -n multinode-144344-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-144344 ssh -n multinode-144344 "sudo cat /home/docker/cp-test_multinode-144344-m03_multinode-144344.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-144344 cp multinode-144344-m03:/home/docker/cp-test.txt multinode-144344-m02:/home/docker/cp-test_multinode-144344-m03_multinode-144344-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-144344 ssh -n multinode-144344-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-144344 ssh -n multinode-144344-m02 "sudo cat /home/docker/cp-test_multinode-144344-m03_multinode-144344-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.16s)

TestMultiNode/serial/StopNode (2.24s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-144344 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-144344 node stop m03: (1.214610796s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-144344 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-144344 status: exit status 7 (516.48838ms)

-- stdout --
	multinode-144344
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-144344-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-144344-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-144344 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-144344 status --alsologtostderr: exit status 7 (507.398391ms)

-- stdout --
	multinode-144344
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-144344-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-144344-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0828 17:32:26.262388  177089 out.go:345] Setting OutFile to fd 1 ...
	I0828 17:32:26.262608  177089 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 17:32:26.262620  177089 out.go:358] Setting ErrFile to fd 2...
	I0828 17:32:26.262626  177089 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 17:32:26.262914  177089 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-2268/.minikube/bin
	I0828 17:32:26.263667  177089 out.go:352] Setting JSON to false
	I0828 17:32:26.263779  177089 mustload.go:65] Loading cluster: multinode-144344
	I0828 17:32:26.264336  177089 config.go:182] Loaded profile config "multinode-144344": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 17:32:26.264352  177089 status.go:255] checking status of multinode-144344 ...
	I0828 17:32:26.264962  177089 notify.go:220] Checking for updates...
	I0828 17:32:26.265317  177089 cli_runner.go:164] Run: docker container inspect multinode-144344 --format={{.State.Status}}
	I0828 17:32:26.286393  177089 status.go:330] multinode-144344 host status = "Running" (err=<nil>)
	I0828 17:32:26.286417  177089 host.go:66] Checking if "multinode-144344" exists ...
	I0828 17:32:26.286739  177089 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-144344
	I0828 17:32:26.307655  177089 host.go:66] Checking if "multinode-144344" exists ...
	I0828 17:32:26.307959  177089 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0828 17:32:26.308008  177089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-144344
	I0828 17:32:26.326105  177089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/19529-2268/.minikube/machines/multinode-144344/id_rsa Username:docker}
	I0828 17:32:26.420774  177089 ssh_runner.go:195] Run: systemctl --version
	I0828 17:32:26.425147  177089 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 17:32:26.439416  177089 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0828 17:32:26.501018  177089 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-08-28 17:32:26.489702286 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0828 17:32:26.501606  177089 kubeconfig.go:125] found "multinode-144344" server: "https://192.168.67.2:8443"
	I0828 17:32:26.501639  177089 api_server.go:166] Checking apiserver status ...
	I0828 17:32:26.501690  177089 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 17:32:26.512856  177089 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2275/cgroup
	I0828 17:32:26.522305  177089 api_server.go:182] apiserver freezer: "8:freezer:/docker/d5698d5c848669abecc9d10065a87ae2bf3e9aa6aa953c03b5d48c911f2a9928/kubepods/burstable/pod7f87bc7a34353bdc8d3d1b22c14c4bee/633a4ea89dfe0058f39ac5e1502c2c747fa850ce4c04802c810410af7b16c014"
	I0828 17:32:26.522392  177089 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/d5698d5c848669abecc9d10065a87ae2bf3e9aa6aa953c03b5d48c911f2a9928/kubepods/burstable/pod7f87bc7a34353bdc8d3d1b22c14c4bee/633a4ea89dfe0058f39ac5e1502c2c747fa850ce4c04802c810410af7b16c014/freezer.state
	I0828 17:32:26.531736  177089 api_server.go:204] freezer state: "THAWED"
	I0828 17:32:26.531764  177089 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0828 17:32:26.539689  177089 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0828 17:32:26.539726  177089 status.go:422] multinode-144344 apiserver status = Running (err=<nil>)
	I0828 17:32:26.539738  177089 status.go:257] multinode-144344 status: &{Name:multinode-144344 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0828 17:32:26.539756  177089 status.go:255] checking status of multinode-144344-m02 ...
	I0828 17:32:26.540081  177089 cli_runner.go:164] Run: docker container inspect multinode-144344-m02 --format={{.State.Status}}
	I0828 17:32:26.556318  177089 status.go:330] multinode-144344-m02 host status = "Running" (err=<nil>)
	I0828 17:32:26.556346  177089 host.go:66] Checking if "multinode-144344-m02" exists ...
	I0828 17:32:26.556657  177089 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-144344-m02
	I0828 17:32:26.575211  177089 host.go:66] Checking if "multinode-144344-m02" exists ...
	I0828 17:32:26.575577  177089 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0828 17:32:26.575623  177089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-144344-m02
	I0828 17:32:26.593023  177089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/19529-2268/.minikube/machines/multinode-144344-m02/id_rsa Username:docker}
	I0828 17:32:26.688257  177089 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 17:32:26.700626  177089 status.go:257] multinode-144344-m02 status: &{Name:multinode-144344-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0828 17:32:26.700678  177089 status.go:255] checking status of multinode-144344-m03 ...
	I0828 17:32:26.700987  177089 cli_runner.go:164] Run: docker container inspect multinode-144344-m03 --format={{.State.Status}}
	I0828 17:32:26.717267  177089 status.go:330] multinode-144344-m03 host status = "Stopped" (err=<nil>)
	I0828 17:32:26.717290  177089 status.go:343] host is not running, skipping remaining checks
	I0828 17:32:26.717298  177089 status.go:257] multinode-144344-m03 status: &{Name:multinode-144344-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.24s)

TestMultiNode/serial/StartAfterStop (11.2s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-144344 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-144344 node start m03 -v=7 --alsologtostderr: (10.426094514s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-144344 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (11.20s)

TestMultiNode/serial/RestartKeepsNodes (98.51s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-144344
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-144344
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-144344: (22.930454104s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-144344 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-144344 --wait=true -v=8 --alsologtostderr: (1m15.461404385s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-144344
--- PASS: TestMultiNode/serial/RestartKeepsNodes (98.51s)

TestMultiNode/serial/DeleteNode (5.64s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-144344 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-144344 node delete m03: (4.951728767s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-144344 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.64s)

TestMultiNode/serial/StopMultiNode (21.58s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-144344 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-144344 stop: (21.387617687s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-144344 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-144344 status: exit status 7 (96.520458ms)

-- stdout --
	multinode-144344
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-144344-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-144344 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-144344 status --alsologtostderr: exit status 7 (94.533649ms)

-- stdout --
	multinode-144344
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-144344-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0828 17:34:43.608842  190623 out.go:345] Setting OutFile to fd 1 ...
	I0828 17:34:43.609046  190623 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 17:34:43.609059  190623 out.go:358] Setting ErrFile to fd 2...
	I0828 17:34:43.609064  190623 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 17:34:43.609320  190623 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-2268/.minikube/bin
	I0828 17:34:43.609565  190623 out.go:352] Setting JSON to false
	I0828 17:34:43.609631  190623 mustload.go:65] Loading cluster: multinode-144344
	I0828 17:34:43.609715  190623 notify.go:220] Checking for updates...
	I0828 17:34:43.610945  190623 config.go:182] Loaded profile config "multinode-144344": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 17:34:43.610972  190623 status.go:255] checking status of multinode-144344 ...
	I0828 17:34:43.611646  190623 cli_runner.go:164] Run: docker container inspect multinode-144344 --format={{.State.Status}}
	I0828 17:34:43.629014  190623 status.go:330] multinode-144344 host status = "Stopped" (err=<nil>)
	I0828 17:34:43.629038  190623 status.go:343] host is not running, skipping remaining checks
	I0828 17:34:43.629046  190623 status.go:257] multinode-144344 status: &{Name:multinode-144344 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0828 17:34:43.629080  190623 status.go:255] checking status of multinode-144344-m02 ...
	I0828 17:34:43.629387  190623 cli_runner.go:164] Run: docker container inspect multinode-144344-m02 --format={{.State.Status}}
	I0828 17:34:43.657028  190623 status.go:330] multinode-144344-m02 host status = "Stopped" (err=<nil>)
	I0828 17:34:43.657049  190623 status.go:343] host is not running, skipping remaining checks
	I0828 17:34:43.657056  190623 status.go:257] multinode-144344-m02 status: &{Name:multinode-144344-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.58s)
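
Note that both status calls above exit with code 7 once the hosts are stopped; the suite treats that as expected for a stopped profile rather than as a failure. A minimal sketch of handling this in a script (profile name taken from this run):

	minikube -p multinode-144344 stop
	minikube -p multinode-144344 status
	rc=$?
	# exit code 7 here means "host stopped", not an error in the stop itself
	[ "$rc" -eq 7 ] && echo "cluster is fully stopped"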

TestMultiNode/serial/RestartMultiNode (61.45s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-144344 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0828 17:34:59.400911    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/functional-154367/client.crt: no such file or directory" logger="UnhandledError"
E0828 17:35:23.162017    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-144344 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m0.720111159s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-144344 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (61.45s)

TestMultiNode/serial/ValidateNameConflict (37.87s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-144344
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-144344-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-144344-m02 --driver=docker  --container-runtime=docker: exit status 14 (125.048144ms)
-- stdout --
	* [multinode-144344-m02] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19529
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19529-2268/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-2268/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-144344-m02' is duplicated with machine name 'multinode-144344-m02' in profile 'multinode-144344'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-144344-m03 --driver=docker  --container-runtime=docker
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-144344-m03 --driver=docker  --container-runtime=docker: (35.219133315s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-144344
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-144344: exit status 80 (326.354555ms)
-- stdout --
	* Adding node m03 to cluster multinode-144344 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-144344-m03 already exists in multinode-144344-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-144344-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-144344-m03: (2.114515s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (37.87s)
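
The two rejections above are the intended guardrails: a profile name that collides with an existing machine name fails with exit code 14 (MK_USAGE), and node add refuses a node that already belongs to another profile with exit code 80 (GUEST_NODE_ADD). A pre-flight sketch for picking a non-conflicting name (jq, and the valid/Name fields of the profile JSON, are assumptions):

	# list names already in use before creating a new profile
	minikube profile list --output json | jq -r '.valid[].Name'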

TestPreload (142.75s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-785896 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-785896 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (1m43.579980438s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-785896 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-785896 image pull gcr.io/k8s-minikube/busybox: (2.138026186s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-785896
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-785896: (10.952987779s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-785896 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-785896 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (23.543743969s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-785896 image list
helpers_test.go:175: Cleaning up "test-preload-785896" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-785896
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-785896: (2.243466269s)
--- PASS: TestPreload (142.75s)
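
TestPreload's flow, reproduced by hand, is: start without a preload tarball on an older Kubernetes, pull an extra image, stop, restart on the default version, and confirm the pulled image survived the restart (profile name illustrative):

	minikube start -p test-preload --preload=false --kubernetes-version=v1.24.4 --driver=docker --container-runtime=docker
	minikube -p test-preload image pull gcr.io/k8s-minikube/busybox
	minikube stop -p test-preload
	minikube start -p test-preload --driver=docker --container-runtime=docker
	# busybox should still appear here
	minikube -p test-preload image list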

TestScheduledStopUnix (104.68s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-481340 --memory=2048 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-481340 --memory=2048 --driver=docker  --container-runtime=docker: (31.448143338s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-481340 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-481340 -n scheduled-stop-481340
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-481340 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-481340 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-481340 -n scheduled-stop-481340
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-481340
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-481340 --schedule 15s
E0828 17:39:59.401108    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/functional-154367/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0828 17:40:23.162868    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-481340
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-481340: exit status 7 (64.676257ms)
-- stdout --
	scheduled-stop-481340
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-481340 -n scheduled-stop-481340
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-481340 -n scheduled-stop-481340: exit status 7 (65.6556ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-481340" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-481340
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-481340: (1.678024629s)
--- PASS: TestScheduledStopUnix (104.68s)
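
The sequence above walks the whole scheduled-stop lifecycle: arm a stop, re-arm it (the "os: process already finished" lines confirm the previous timer process was replaced), cancel, then arm again and let it fire, after which status reports Stopped with exit code 7. The same steps by hand (profile name illustrative):

	minikube stop -p scheduled-stop --schedule 5m      # arm a stop 5 minutes out
	minikube status --format={{.TimeToStop}} -p scheduled-stop
	minikube stop -p scheduled-stop --cancel-scheduled # disarm
	minikube stop -p scheduled-stop --schedule 15s     # re-arm; fires shortly after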

TestSkaffold (114.94s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe3555366030 version
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p skaffold-454253 --memory=2600 --driver=docker  --container-runtime=docker
skaffold_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p skaffold-454253 --memory=2600 --driver=docker  --container-runtime=docker: (29.159932783s)
skaffold_test.go:86: copying out/minikube-linux-arm64 to /home/jenkins/workspace/Docker_Linux_docker_arm64/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe3555366030 run --minikube-profile skaffold-454253 --kube-context skaffold-454253 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe3555366030 run --minikube-profile skaffold-454253 --kube-context skaffold-454253 --status-check=true --port-forward=false --interactive=false: (1m10.136244619s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-555cf4dfb-d42zn" [4aeb7a4f-ce4b-41a0-aa5f-d5dd9b899bf4] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.003981314s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-5f674766c7-jp2d6" [da9f598a-79c4-4154-8ec4-a33d3ef18227] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003824152s
helpers_test.go:175: Cleaning up "skaffold-454253" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p skaffold-454253
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p skaffold-454253: (2.956912444s)
--- PASS: TestSkaffold (114.94s)
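
The skaffold leg drives a real build-and-deploy against the cluster using the profile and context flags shown above; a minimal sketch of the same invocation (binary paths and profile name illustrative):

	minikube start -p skaffold-demo --memory=2600 --driver=docker --container-runtime=docker
	skaffold run --minikube-profile skaffold-demo --kube-context skaffold-demo --status-check=true --port-forward=false --interactive=false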

TestInsufficientStorage (11.5s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-313039 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-313039 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (9.209448275s)
-- stdout --
	{"specversion":"1.0","id":"43998490-47c7-4f60-b636-f5add2688774","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-313039] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1c2803bc-d42d-4def-8421-3b6231063054","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19529"}}
	{"specversion":"1.0","id":"5385ee76-6f77-4819-a4b8-233b3ecc8e48","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"cee1e235-ee91-4867-8181-cb9170ecb4d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19529-2268/kubeconfig"}}
	{"specversion":"1.0","id":"719e8c15-aa12-422d-850d-5991a0bce77a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-2268/.minikube"}}
	{"specversion":"1.0","id":"288ce4e5-b4ea-4e07-9c2d-2db757ff1293","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"1ab7ce5a-9e05-4043-9cae-3c49a754ed5f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"50a31595-9786-4db2-ad54-2a6b87d64331","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"eec4510e-2aff-4a80-bc19-9d285ae5e5e9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"75f4edac-aded-4695-bbd4-547b6b462241","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"c88cb564-879d-48bf-aad2-1491f2c66f03","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"5209e923-fd00-4875-a1b8-09d2e1542b8a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-313039\" primary control-plane node in \"insufficient-storage-313039\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"f644846b-ae3e-448c-a0e4-59e065d85b69","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.44-1724775115-19521 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"6c780468-2cfb-4727-a0ff-d3f42bb28301","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"19754774-eb2c-4ccd-9e5a-0d2f492d4b42","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-313039 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-313039 --output=json --layout=cluster: exit status 7 (281.989784ms)
-- stdout --
	{"Name":"insufficient-storage-313039","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-313039","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0828 17:42:38.793052  224835 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-313039" does not appear in /home/jenkins/minikube-integration/19529-2268/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-313039 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-313039 --output=json --layout=cluster: exit status 7 (310.412789ms)
-- stdout --
	{"Name":"insufficient-storage-313039","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-313039","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0828 17:42:39.101570  224899 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-313039" does not appear in /home/jenkins/minikube-integration/19529-2268/kubeconfig
	E0828 17:42:39.114624  224899 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/insufficient-storage-313039/events.json: no such file or directory
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-313039" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-313039
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-313039: (1.693650046s)
--- PASS: TestInsufficientStorage (11.50s)
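
With --output=json, each start step above arrives as a one-line CloudEvent on stdout, and the out-of-space failure is an io.k8s.sigs.minikube.error event carrying exitcode 26 (RSRC_DOCKER_STORAGE) plus remediation advice. A sketch for extracting the error message from that stream (jq is an assumption; profile name illustrative):

	minikube start -p demo --output=json --driver=docker \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'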

TestRunningBinaryUpgrade (104.91s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3508142125 start -p running-upgrade-532892 --memory=2200 --vm-driver=docker  --container-runtime=docker
E0828 17:48:37.270376    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/skaffold-454253/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3508142125 start -p running-upgrade-532892 --memory=2200 --vm-driver=docker  --container-runtime=docker: (55.843700991s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-532892 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-532892 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (45.615411959s)
helpers_test.go:175: Cleaning up "running-upgrade-532892" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-532892
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-532892: (2.239394009s)
--- PASS: TestRunningBinaryUpgrade (104.91s)

TestKubernetesUpgrade (372.66s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-712074 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0828 17:44:59.401132    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/functional-154367/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-712074 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (54.029737698s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-712074
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-712074: (1.282029486s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-712074 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-712074 status --format={{.Host}}: exit status 7 (68.504468ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-712074 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-712074 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m42.771792852s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-712074 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-712074 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-712074 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker: exit status 106 (112.209491ms)
-- stdout --
	* [kubernetes-upgrade-712074] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19529
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19529-2268/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-2268/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-712074
	    minikube start -p kubernetes-upgrade-712074 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7120742 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start -p kubernetes-upgrade-712074 --kubernetes-version=v1.31.0
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-712074 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-712074 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (31.806930014s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-712074" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-712074
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-712074: (2.469120975s)
--- PASS: TestKubernetesUpgrade (372.66s)
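
The upgrade test's shape is start-old, stop, start-new; the in-place downgrade attempt is refused with exit code 106 (K8S_DOWNGRADE_UNSUPPORTED), and the CLI's own suggestion, quoted above, is to delete and recreate. Condensed (profile name illustrative):

	minikube start -p upgrade-demo --kubernetes-version=v1.20.0 --driver=docker
	minikube stop -p upgrade-demo
	minikube start -p upgrade-demo --kubernetes-version=v1.31.0 --driver=docker  # upgrade in place: allowed
	# downgrade in place is rejected; recreate instead:
	minikube delete -p upgrade-demo
	minikube start -p upgrade-demo --kubernetes-version=v1.20.0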

TestMissingContainerUpgrade (176.94s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.1846294153 start -p missing-upgrade-717509 --memory=2200 --driver=docker  --container-runtime=docker
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.1846294153 start -p missing-upgrade-717509 --memory=2200 --driver=docker  --container-runtime=docker: (1m41.602786875s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-717509
E0828 17:45:23.161476    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-717509: (10.553810317s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-717509
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-717509 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-717509 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m1.219274668s)
helpers_test.go:175: Cleaning up "missing-upgrade-717509" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-717509
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-717509: (2.223396968s)
--- PASS: TestMissingContainerUpgrade (176.94s)

TestPause/serial/Start (89.99s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-757667 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
E0828 17:43:26.230391    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-757667 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (1m29.991135787s)
--- PASS: TestPause/serial/Start (89.99s)

TestPause/serial/SecondStartNoReconfiguration (28.42s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-757667 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-757667 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (28.40293552s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (28.42s)

TestPause/serial/Pause (0.78s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-757667 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.78s)

TestPause/serial/VerifyStatus (0.43s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-757667 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-757667 --output=json --layout=cluster: exit status 2 (429.90603ms)
-- stdout --
	{"Name":"pause-757667","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-757667","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.43s)
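
As the payload above shows, status --output=json --layout=cluster borrows HTTP-style codes: 200 OK, 405 Stopped, 418 Paused (and 507 InsufficientStorage in the earlier storage test), with the command itself exiting 2 for a paused cluster. A sketch for reading per-node state (jq is an assumption):

	minikube status -p pause-757667 --output=json --layout=cluster | jq -r '.Nodes[].StatusName'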

TestPause/serial/Unpause (0.64s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-757667 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.64s)

TestPause/serial/PauseAgain (1.02s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-757667 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-757667 --alsologtostderr -v=5: (1.019446647s)
--- PASS: TestPause/serial/PauseAgain (1.02s)

TestPause/serial/DeletePaused (2.32s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-757667 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-757667 --alsologtostderr -v=5: (2.323832336s)
--- PASS: TestPause/serial/DeletePaused (2.32s)

TestPause/serial/VerifyDeletedResources (0.42s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-757667
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-757667: exit status 1 (19.795135ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-757667: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.42s)
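
Deletion is verified above by absence: once the profile is gone, docker volume inspect exits 1 with "no such volume". The same spot-check by hand (profile name illustrative):

	minikube delete -p pause-demo
	docker volume inspect pause-demo >/dev/null 2>&1 || echo "volume gone, as expected"
	docker network ls | grep pause-demo || echo "network gone, as expected"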

TestStoppedBinaryUpgrade/Setup (0.99s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.99s)

TestStoppedBinaryUpgrade/Upgrade (91.53s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3805199769 start -p stopped-upgrade-089249 --memory=2200 --vm-driver=docker  --container-runtime=docker
E0828 17:47:15.333206    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/skaffold-454253/client.crt: no such file or directory" logger="UnhandledError"
E0828 17:47:15.339803    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/skaffold-454253/client.crt: no such file or directory" logger="UnhandledError"
E0828 17:47:15.351141    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/skaffold-454253/client.crt: no such file or directory" logger="UnhandledError"
E0828 17:47:15.372453    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/skaffold-454253/client.crt: no such file or directory" logger="UnhandledError"
E0828 17:47:15.413784    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/skaffold-454253/client.crt: no such file or directory" logger="UnhandledError"
E0828 17:47:15.495080    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/skaffold-454253/client.crt: no such file or directory" logger="UnhandledError"
E0828 17:47:15.656435    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/skaffold-454253/client.crt: no such file or directory" logger="UnhandledError"
E0828 17:47:15.978154    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/skaffold-454253/client.crt: no such file or directory" logger="UnhandledError"
E0828 17:47:16.620060    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/skaffold-454253/client.crt: no such file or directory" logger="UnhandledError"
E0828 17:47:17.901355    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/skaffold-454253/client.crt: no such file or directory" logger="UnhandledError"
E0828 17:47:20.462662    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/skaffold-454253/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3805199769 start -p stopped-upgrade-089249 --memory=2200 --vm-driver=docker  --container-runtime=docker: (48.946912156s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3805199769 -p stopped-upgrade-089249 stop
E0828 17:47:25.584733    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/skaffold-454253/client.crt: no such file or directory" logger="UnhandledError"
E0828 17:47:35.826945    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/skaffold-454253/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3805199769 -p stopped-upgrade-089249 stop: (10.946650467s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-089249 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0828 17:47:56.308307    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/skaffold-454253/client.crt: no such file or directory" logger="UnhandledError"
E0828 17:48:02.471465    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/functional-154367/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-089249 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (31.640011417s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (91.53s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.69s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-089249
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-089249: (1.689653917s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.69s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-700622 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-700622 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (81.235691ms)
-- stdout --
	* [NoKubernetes-700622] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19529
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19529-2268/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-2268/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
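
As the exit-code-14 (MK_USAGE) failure above shows, --no-kubernetes and --kubernetes-version are mutually exclusive, and the stderr hint applies when a version is pinned in the global config (profile name illustrative):

	minikube config unset kubernetes-version   # clear any globally pinned version
	minikube start -p nok8s-demo --no-kubernetes --driver=docker --container-runtime=docker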

TestNoKubernetes/serial/StartWithK8s (44.36s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-700622 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-700622 --driver=docker  --container-runtime=docker: (43.869818344s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-700622 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (44.36s)

TestNoKubernetes/serial/StartWithStopK8s (18.76s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-700622 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-700622 --no-kubernetes --driver=docker  --container-runtime=docker: (16.349711629s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-700622 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-700622 status -o json: exit status 2 (507.664171ms)
-- stdout --
	{"Name":"NoKubernetes-700622","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-700622
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-700622: (1.905668276s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (18.76s)

TestNoKubernetes/serial/Start (13.11s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-700622 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-700622 --no-kubernetes --driver=docker  --container-runtime=docker: (13.114399699s)
--- PASS: TestNoKubernetes/serial/Start (13.11s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.57s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-700622 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-700622 "sudo systemctl is-active --quiet service kubelet": exit status 1 (572.249754ms)
** stderr **
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.57s)
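
The "Process exited with status 3" above is the point of the check: systemctl is-active exits 0 only for an active unit (3 conventionally means inactive), so a non-zero exit from the ssh'd probe is what proves the kubelet is not running. By hand (profile name illustrative; command string copied from the test):

	minikube ssh -p nok8s-demo "sudo systemctl is-active --quiet service kubelet" \
	  && echo "kubelet active" || echo "kubelet not running (expected with --no-kubernetes)"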

TestNoKubernetes/serial/ProfileList (0.97s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.97s)

TestNoKubernetes/serial/Stop (1.25s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-700622
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-700622: (1.253587849s)
--- PASS: TestNoKubernetes/serial/Stop (1.25s)

TestNoKubernetes/serial/StartNoArgs (8.56s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-700622 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-700622 --driver=docker  --container-runtime=docker: (8.556936243s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.56s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.34s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-700622 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-700622 "sudo systemctl is-active --quiet service kubelet": exit status 1 (343.84151ms)
** stderr **
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.34s)

TestStartStop/group/old-k8s-version/serial/FirstStart (140.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-040537 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0828 17:54:59.401171    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/functional-154367/client.crt: no such file or directory" logger="UnhandledError"
E0828 17:55:23.161301    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-040537 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m20.085078377s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (140.09s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.96s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-040537 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9e3f4f4b-5a4e-41a8-9b9a-b540250d0676] Pending
helpers_test.go:344: "busybox" [9e3f4f4b-5a4e-41a8-9b9a-b540250d0676] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9e3f4f4b-5a4e-41a8-9b9a-b540250d0676] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.018802353s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-040537 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.96s)

TestStartStop/group/no-preload/serial/FirstStart (63.34s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-378935 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-378935 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (1m3.340900865s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (63.34s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.92s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-040537 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-040537 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.595965345s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-040537 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.92s)

TestStartStop/group/old-k8s-version/serial/Stop (12.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-040537 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-040537 --alsologtostderr -v=3: (12.018651476s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.02s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-040537 -n old-k8s-version-040537
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-040537 -n old-k8s-version-040537: exit status 7 (91.340308ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-040537 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)
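Editor's note: "status error: exit status 7 (may be ok)" above is the test tolerating a non-zero exit from minikube status, which signals cluster state through its exit code, so a freshly stopped host is expected to report failure here. A short sketch of the same sequence (profile name from this log; the `|| true` is only to keep a script from aborting on the expected non-zero exit):

	# Host state query; prints "Stopped" and exits non-zero on a stopped cluster.
	out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-040537 || true
	# Addon settings are stored in the profile, so dashboard can be enabled
	# while the cluster is down; it takes effect on the next start.
	out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-040537 \
	  --images=MetricsScraper=registry.k8s.io/echoserver:1.4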

TestStartStop/group/old-k8s-version/serial/SecondStart (378.73s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-040537 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-040537 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (6m18.278558147s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-040537 -n old-k8s-version-040537
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (378.73s)

TestStartStop/group/no-preload/serial/DeployApp (10.46s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-378935 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [42390f44-ab45-42d4-816a-158f38045490] Pending
helpers_test.go:344: "busybox" [42390f44-ab45-42d4-816a-158f38045490] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [42390f44-ab45-42d4-816a-158f38045490] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004306684s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-378935 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.46s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.15s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-378935 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-378935 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.032815376s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-378935 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.15s)

TestStartStop/group/no-preload/serial/Stop (10.88s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-378935 --alsologtostderr -v=3
E0828 17:57:15.332299    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/skaffold-454253/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-378935 --alsologtostderr -v=3: (10.875986995s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.88s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-378935 -n no-preload-378935
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-378935 -n no-preload-378935: exit status 7 (70.324491ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-378935 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/no-preload/serial/SecondStart (266.48s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-378935 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
E0828 17:59:59.401729    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/functional-154367/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:00:06.231894    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:00:23.161333    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-378935 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (4m26.117372906s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-378935 -n no-preload-378935
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (266.48s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-jp4gg" [d1b7c2da-63fd-4914-bc1d-dc594b345ff5] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003800643s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.13s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-jp4gg" [d1b7c2da-63fd-4914-bc1d-dc594b345ff5] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003610793s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-378935 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.13s)
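Editor's note: UserAppExistsAfterStop and AddonExistsAfterStop both reduce to "a pod with the given label is Running and healthy after the restart". A rough standalone equivalent of the wait above, using kubectl wait in place of the test's own polling helper (label, namespace, context, and the 9m0s timeout are from the log):

	kubectl --context no-preload-378935 -n kubernetes-dashboard \
	  wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m
	# The addon variant additionally inspects the scraper deployment:
	kubectl --context no-preload-378935 -n kubernetes-dashboard \
	  describe deploy/dashboard-metrics-scraper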

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-378935 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/no-preload/serial/Pause (3.16s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-378935 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-378935 -n no-preload-378935
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-378935 -n no-preload-378935: exit status 2 (334.756164ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-378935 -n no-preload-378935
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-378935 -n no-preload-378935: exit status 2 (324.080281ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-378935 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-378935 -n no-preload-378935
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-378935 -n no-preload-378935
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.16s)
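Editor's note: in the Pause sequence above the two failing status calls are deliberate — while the profile is paused, the API server reports "Paused" and the kubelet "Stopped", each with exit status 2, and "(may be ok)" means the test treats that as the expected paused state rather than an error. A condensed sketch of the same round trip (node flags dropped for brevity):

	out/minikube-linux-arm64 pause -p no-preload-378935
	# Expected to exit non-zero while paused; || true keeps a script alive.
	out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-378935 || true  # Paused
	out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-378935 || true    # Stopped
	out/minikube-linux-arm64 unpause -p no-preload-378935
	# Both queries succeed again once the components are resumed.
	out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-378935
	out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-378935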

TestStartStop/group/embed-certs/serial/FirstStart (81.53s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-937112 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
E0828 18:02:15.332372    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/skaffold-454253/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-937112 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (1m21.527296002s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (81.53s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-255n4" [4c8c3d0d-7a6a-4ea5-853b-d3e9fce0d9ff] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003937105s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.14s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-255n4" [4c8c3d0d-7a6a-4ea5-853b-d3e9fce0d9ff] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00825299s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-040537 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.14s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-040537 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/old-k8s-version/serial/Pause (3.43s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-040537 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-040537 -n old-k8s-version-040537
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-040537 -n old-k8s-version-040537: exit status 2 (464.586538ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-040537 -n old-k8s-version-040537
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-040537 -n old-k8s-version-040537: exit status 2 (395.21573ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-040537 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-040537 -n old-k8s-version-040537
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-040537 -n old-k8s-version-040537
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.43s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (43.7s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-292227 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-292227 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (43.704787087s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (43.70s)

TestStartStop/group/embed-certs/serial/DeployApp (9.43s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-937112 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [dbc46000-41eb-45e8-b767-e43551124e81] Pending
helpers_test.go:344: "busybox" [dbc46000-41eb-45e8-b767-e43551124e81] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [dbc46000-41eb-45e8-b767-e43551124e81] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003431078s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-937112 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.43s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.44s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-292227 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ad28fcbc-64a2-4f0c-81a8-7be2597883af] Pending
helpers_test.go:344: "busybox" [ad28fcbc-64a2-4f0c-81a8-7be2597883af] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ad28fcbc-64a2-4f0c-81a8-7be2597883af] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.00562957s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-292227 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.44s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.07s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-937112 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-937112 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.07s)

TestStartStop/group/embed-certs/serial/Stop (10.97s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-937112 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-937112 --alsologtostderr -v=3: (10.97459748s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.97s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.13s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-292227 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0828 18:03:38.396514    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/skaffold-454253/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-292227 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.13s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (10.97s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-292227 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-292227 --alsologtostderr -v=3: (10.965340581s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.97s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-937112 -n embed-certs-937112
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-937112 -n embed-certs-937112: exit status 7 (75.130413ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-937112 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/embed-certs/serial/SecondStart (270.68s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-937112 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-937112 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (4m30.310135636s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-937112 -n embed-certs-937112
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (270.68s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-292227 -n default-k8s-diff-port-292227
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-292227 -n default-k8s-diff-port-292227: exit status 7 (80.874914ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-292227 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (272.49s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-292227 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
E0828 18:04:42.473643    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/functional-154367/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:04:59.401399    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/functional-154367/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:05:23.161758    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/addons-161312/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:05:42.435819    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/old-k8s-version-040537/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:05:42.442207    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/old-k8s-version-040537/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:05:42.453609    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/old-k8s-version-040537/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:05:42.475177    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/old-k8s-version-040537/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:05:42.516619    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/old-k8s-version-040537/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:05:42.598216    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/old-k8s-version-040537/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:05:42.759798    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/old-k8s-version-040537/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:05:43.081544    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/old-k8s-version-040537/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:05:43.722911    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/old-k8s-version-040537/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:05:45.004783    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/old-k8s-version-040537/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:05:47.566399    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/old-k8s-version-040537/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:05:52.687835    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/old-k8s-version-040537/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:06:02.930987    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/old-k8s-version-040537/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:06:23.412688    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/old-k8s-version-040537/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:06:56.104570    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/no-preload-378935/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:06:56.110953    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/no-preload-378935/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:06:56.122394    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/no-preload-378935/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:06:56.143928    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/no-preload-378935/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:06:56.185352    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/no-preload-378935/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:06:56.266783    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/no-preload-378935/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:06:56.428395    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/no-preload-378935/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:06:56.749846    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/no-preload-378935/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:06:57.391123    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/no-preload-378935/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:06:58.672607    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/no-preload-378935/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:07:01.234498    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/no-preload-378935/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:07:04.374964    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/old-k8s-version-040537/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:07:06.356940    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/no-preload-378935/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:07:15.332557    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/skaffold-454253/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:07:16.599165    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/no-preload-378935/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:07:37.080667    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/no-preload-378935/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-292227 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (4m32.074083533s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-292227 -n default-k8s-diff-port-292227
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (272.49s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-57jpt" [353622f9-0469-4ab6-8ec4-2b584e42b338] Running
E0828 18:08:18.042184    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/no-preload-378935/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004273866s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-b9q6r" [21b75dd8-61ff-40b8-9b2e-bffa870d250e] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00329561s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.1s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-57jpt" [353622f9-0469-4ab6-8ec4-2b584e42b338] Running
E0828 18:08:26.296885    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/old-k8s-version-040537/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004343009s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-937112 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.10s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-b9q6r" [21b75dd8-61ff-40b8-9b2e-bffa870d250e] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003409024s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-292227 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-937112 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/embed-certs/serial/Pause (2.92s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-937112 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-937112 -n embed-certs-937112
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-937112 -n embed-certs-937112: exit status 2 (326.634926ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-937112 -n embed-certs-937112
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-937112 -n embed-certs-937112: exit status 2 (370.761606ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-937112 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-937112 -n embed-certs-937112
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-937112 -n embed-certs-937112
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.92s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-292227 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.96s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-292227 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-292227 -n default-k8s-diff-port-292227
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-292227 -n default-k8s-diff-port-292227: exit status 2 (367.763766ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-292227 -n default-k8s-diff-port-292227
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-292227 -n default-k8s-diff-port-292227: exit status 2 (404.641202ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-292227 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-292227 -n default-k8s-diff-port-292227
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-292227 -n default-k8s-diff-port-292227
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.96s)

TestStartStop/group/newest-cni/serial/FirstStart (46.57s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-236748 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-236748 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (46.569201781s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (46.57s)
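Editor's note: this profile is started with --network-plugin=cni and --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16, which is why later newest-cni steps warn that pods cannot schedule until a CNI plugin is actually installed. One hedged way to confirm the CIDR override was applied — this jsonpath query is illustrative, not part of the test:

	kubectl --context newest-cni-236748 get nodes \
	  -o jsonpath='{.items[0].spec.podCIDR}'
	# expected: a per-node range allocated from 10.42.0.0/16, e.g. 10.42.0.0/24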

TestNetworkPlugins/group/auto/Start (80.14s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-456967 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-456967 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (1m20.138104622s)
--- PASS: TestNetworkPlugins/group/auto/Start (80.14s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.3s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-236748 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-236748 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.295797717s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.30s)

TestStartStop/group/newest-cni/serial/Stop (11.07s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-236748 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-236748 --alsologtostderr -v=3: (11.07256448s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.07s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-236748 -n newest-cni-236748
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-236748 -n newest-cni-236748: exit status 7 (63.063022ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-236748 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/newest-cni/serial/SecondStart (19.07s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-236748 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
E0828 18:09:39.964131    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/no-preload-378935/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-236748 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (18.55121449s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-236748 -n newest-cni-236748
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (19.07s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-236748 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/newest-cni/serial/Pause (3.69s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-236748 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-236748 -n newest-cni-236748
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-236748 -n newest-cni-236748: exit status 2 (443.443496ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-236748 -n newest-cni-236748
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-236748 -n newest-cni-236748: exit status 2 (424.658252ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-236748 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-236748 -n newest-cni-236748
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-236748 -n newest-cni-236748
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.69s)
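The two exit status 2 results above are the point of this test: while paused, status still renders the requested --format Go template ({{.APIServer}} prints Paused, {{.Kubelet}} prints Stopped) but signals the non-running state through its exit code. The sequence can be replayed by hand:

    out/minikube-linux-arm64 pause -p newest-cni-236748
    out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-236748
    out/minikube-linux-arm64 unpause -p newest-cni-236748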
E0828 18:16:58.191876    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/calico-456967/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:16:58.198234    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/calico-456967/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:16:58.209614    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/calico-456967/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:16:58.230983    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/calico-456967/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:16:58.272343    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/calico-456967/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:16:58.353872    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/calico-456967/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:16:58.515200    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/calico-456967/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:16:58.836614    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/calico-456967/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:16:59.478212    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/calico-456967/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:17:00.760239    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/calico-456967/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:17:03.322272    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/calico-456967/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:17:08.444366    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/calico-456967/client.crt: no such file or directory" logger="UnhandledError"

TestNetworkPlugins/group/kindnet/Start (75.87s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-456967 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-456967 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (1m15.873261866s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (75.87s)

TestNetworkPlugins/group/auto/KubeletFlags (0.36s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-456967 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.36s)
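The KubeletFlags steps simply dump the kubelet command line: pgrep -a matches the process by name and prints its full argument list, which lets the harness check for the network flags each plugin variant should set. By hand:

    out/minikube-linux-arm64 ssh -p auto-456967 "pgrep -a kubelet"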

TestNetworkPlugins/group/auto/NetCatPod (13.43s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-456967 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-42kbf" [338bcf7c-2ce9-4dac-acf4-ab12b51ef101] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-42kbf" [338bcf7c-2ce9-4dac-acf4-ab12b51ef101] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 13.004007554s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (13.43s)
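NetCatPod force-replaces the deployment from testdata/netcat-deployment.yaml and then polls until a pod labelled app=netcat is Running. A roughly equivalent standalone check (a sketch; the harness polls through the Go client rather than kubectl):

    kubectl --context auto-456967 wait --for=condition=Ready pod -l app=netcat --timeout=15m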

TestNetworkPlugins/group/auto/DNS (0.24s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-456967 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.24s)
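The DNS step leans on the pod's resolv.conf search path to expand the short name, so when it fails, resolving the fully qualified name is a useful cross-check to separate search-path problems from a dead cluster DNS:

    kubectl --context auto-456967 exec deployment/netcat -- nslookup kubernetes.default.svc.cluster.local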

TestNetworkPlugins/group/auto/Localhost (0.22s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-456967 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.22s)

TestNetworkPlugins/group/auto/HairPin (0.25s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-456967 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.25s)
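Localhost and HairPin reuse one netcat probe against two targets: localhost 8080 confirms the pod reaches its own listener over loopback, while netcat 8080 dials back in through the service name created by the same manifest (hairpin traffic, which some CNI configurations mishandle). -z only tests for a listener and -w 5 caps the wait, so a hairpin black hole surfaces as a timeout rather than a refusal:

    kubectl --context auto-456967 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"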

TestNetworkPlugins/group/calico/Start (75.68s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-456967 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
E0828 18:11:10.139039    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/old-k8s-version-040537/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-456967 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m15.681798503s)
--- PASS: TestNetworkPlugins/group/calico/Start (75.68s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-qc8ll" [f82677fb-a614-4c8d-9b5e-c5725f370ee5] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003689664s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.36s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-456967 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.36s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.33s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-456967 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-9tmdb" [11de7b56-5b5e-4d84-b430-51b86a55a3f4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-9tmdb" [11de7b56-5b5e-4d84-b430-51b86a55a3f4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004553949s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.33s)

TestNetworkPlugins/group/kindnet/DNS (0.26s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-456967 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.26s)

TestNetworkPlugins/group/kindnet/Localhost (0.28s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-456967 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.28s)

TestNetworkPlugins/group/kindnet/HairPin (0.27s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-456967 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.27s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-79vsk" [d9e120ee-3210-47e8-ba43-7faca25ec56d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00671575s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)
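The ControllerPod steps gate each plugin's battery on the plugin's own daemon pod being Ready, selected purely by label. An equivalent manual gate (a sketch):

    kubectl --context calico-456967 -n kube-system wait --for=condition=Ready pod -l k8s-app=calico-node --timeout=10m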

TestNetworkPlugins/group/custom-flannel/Start (62.55s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-456967 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-456967 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (1m2.545490361s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (62.55s)
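--cni accepts a path to a CNI manifest in addition to the built-in plugin names; here minikube applies testdata/kube-flannel.yaml instead of deploying a bundled plugin, and any self-contained CNI manifest can be substituted the same way:

    out/minikube-linux-arm64 start -p custom-flannel-456967 --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=docker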

TestNetworkPlugins/group/calico/KubeletFlags (0.38s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-456967 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.38s)

TestNetworkPlugins/group/calico/NetCatPod (12.5s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-456967 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-tgh72" [a3f6b3dd-2c83-4ff4-9cb9-886cd92cda6e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-tgh72" [a3f6b3dd-2c83-4ff4-9cb9-886cd92cda6e] Running
E0828 18:12:15.332760    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/skaffold-454253/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.007931784s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.50s)

TestNetworkPlugins/group/calico/DNS (0.26s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-456967 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.26s)

TestNetworkPlugins/group/calico/Localhost (0.21s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-456967 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.21s)

TestNetworkPlugins/group/calico/HairPin (0.22s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-456967 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.22s)

TestNetworkPlugins/group/false/Start (81.73s)
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p false-456967 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p false-456967 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (1m21.732445632s)
--- PASS: TestNetworkPlugins/group/false/Start (81.73s)
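--cni=false starts the node with CNI explicitly disabled, and the suite still runs the full netcat battery against it, checking that single-node pod networking holds up without a plugin under the docker runtime:

    out/minikube-linux-arm64 start -p false-456967 --cni=false --driver=docker --container-runtime=docker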

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.39s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-456967 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.39s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.41s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-456967 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-jtfx5" [5ca1d5db-aa60-4aa2-b378-2b1ee50c2b25] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-jtfx5" [5ca1d5db-aa60-4aa2-b378-2b1ee50c2b25] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.005147063s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.41s)

TestNetworkPlugins/group/custom-flannel/DNS (0.31s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-456967 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.31s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.27s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-456967 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.27s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.25s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-456967 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.25s)

TestNetworkPlugins/group/enable-default-cni/Start (72.97s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-456967 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
E0828 18:13:48.319084    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/default-k8s-diff-port-292227/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-456967 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (1m12.969173394s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (72.97s)
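--enable-default-cni=true is the legacy spelling for the built-in bridge CNI (recent minikube releases treat it as an alias for --cni=bridge), so this group and the bridge group below should exercise essentially the same data path:

    out/minikube-linux-arm64 start -p enable-default-cni-456967 --enable-default-cni=true --driver=docker --container-runtime=docker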

TestNetworkPlugins/group/false/KubeletFlags (0.4s)
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p false-456967 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.40s)

TestNetworkPlugins/group/false/NetCatPod (12.36s)
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-456967 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-zkxlr" [1302ca07-ccbc-434c-8ea9-74fa3f5c6272] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0828 18:14:08.800743    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/default-k8s-diff-port-292227/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-zkxlr" [1302ca07-ccbc-434c-8ea9-74fa3f5c6272] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 12.004032278s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (12.36s)

TestNetworkPlugins/group/false/DNS (0.21s)
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-456967 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.21s)

TestNetworkPlugins/group/false/Localhost (0.19s)
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-456967 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.19s)

TestNetworkPlugins/group/false/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-456967 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.17s)

TestNetworkPlugins/group/flannel/Start (57.68s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-456967 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
E0828 18:14:49.762923    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/default-k8s-diff-port-292227/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-456967 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (57.67708197s)
--- PASS: TestNetworkPlugins/group/flannel/Start (57.68s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-456967 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.32s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-456967 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-2qvrk" [00c9e7cf-3046-4b00-b522-528a2bf8d8ee] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-2qvrk" [00c9e7cf-3046-4b00-b522-528a2bf8d8ee] Running
E0828 18:14:59.400833    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/functional-154367/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004364957s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.32s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.24s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-456967 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.24s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.22s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-456967 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.22s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.23s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-456967 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.23s)

TestNetworkPlugins/group/bridge/Start (50.61s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-456967 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-456967 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (50.612374233s)
--- PASS: TestNetworkPlugins/group/bridge/Start (50.61s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-zmlmg" [81b2e8dd-6495-453b-9e75-21ffd6d8337f] Running
E0828 18:15:42.435332    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/old-k8s-version-040537/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.006151572s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.36s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-456967 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.36s)

TestNetworkPlugins/group/flannel/NetCatPod (11.35s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-456967 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-5bfdl" [301fee29-4dad-439d-a748-fa30198a4556] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0828 18:15:45.126973    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/auto-456967/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-5bfdl" [301fee29-4dad-439d-a748-fa30198a4556] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.008038535s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.35s)

TestNetworkPlugins/group/flannel/DNS (0.28s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-456967 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.28s)

TestNetworkPlugins/group/flannel/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-456967 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

TestNetworkPlugins/group/flannel/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-456967 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-456967 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

TestNetworkPlugins/group/bridge/NetCatPod (13.38s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-456967 replace --force -f testdata/netcat-deployment.yaml
E0828 18:16:18.804111    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/kindnet-456967/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-km755" [7beca773-6241-486c-aa77-a72c2d4c7bc3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-km755" [7beca773-6241-486c-aa77-a72c2d4c7bc3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 13.005504114s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (13.38s)

TestNetworkPlugins/group/kubenet/Start (54.97s)
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kubenet-456967 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
E0828 18:16:21.366914    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/kindnet-456967/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:16:26.091492    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/auto-456967/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:16:26.488734    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/kindnet-456967/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kubenet-456967 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (54.970842517s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (54.97s)
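kubenet is kubelet's built-in network plugin rather than a CNI plugin, so it is selected with --network-plugin=kubenet instead of --cni:

    out/minikube-linux-arm64 start -p kubenet-456967 --network-plugin=kubenet --driver=docker --container-runtime=docker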

TestNetworkPlugins/group/bridge/DNS (0.24s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-456967 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.24s)

TestNetworkPlugins/group/bridge/Localhost (0.2s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-456967 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.20s)

TestNetworkPlugins/group/bridge/HairPin (0.19s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-456967 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.19s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.36s)
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kubenet-456967 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.36s)

TestNetworkPlugins/group/kubenet/NetCatPod (11.26s)
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-456967 replace --force -f testdata/netcat-deployment.yaml
E0828 18:17:15.332348    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/skaffold-454253/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-vhzq6" [992032b4-0a13-4b71-adf0-be93f09c8e4b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0828 18:17:18.686094    7584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-2268/.minikube/profiles/calico-456967/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-vhzq6" [992032b4-0a13-4b71-adf0-be93f09c8e4b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 11.003379718s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (11.26s)

TestNetworkPlugins/group/kubenet/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-456967 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.18s)

TestNetworkPlugins/group/kubenet/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-456967 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.15s)

TestNetworkPlugins/group/kubenet/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-456967 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.17s)


Test skip (24/343)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

TestDownloadOnly/v1.31.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.31.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0/kubectl (0.00s)

TestDownloadOnlyKic (0.56s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-651207 --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-651207" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-651207
--- SKIP: TestDownloadOnlyKic (0.56s)

TestAddons/parallel/HelmTiller (0s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestStartStop/group/disable-driver-mounts (0.26s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-126810" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-126810
--- SKIP: TestStartStop/group/disable-driver-mounts (0.26s)

TestNetworkPlugins/group/cilium (4.92s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-456967 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-456967

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-456967

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-456967

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-456967

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-456967

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-456967

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-456967

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-456967

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-456967

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-456967

>>> host: /etc/nsswitch.conf:
* Profile "cilium-456967" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-456967"

>>> host: /etc/hosts:
* Profile "cilium-456967" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-456967"

>>> host: /etc/resolv.conf:
* Profile "cilium-456967" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-456967"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-456967

>>> host: crictl pods:
* Profile "cilium-456967" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-456967"

>>> host: crictl containers:
* Profile "cilium-456967" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-456967"

>>> k8s: describe netcat deployment:
error: context "cilium-456967" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-456967" does not exist

>>> k8s: netcat logs:
error: context "cilium-456967" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-456967" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-456967" does not exist

>>> k8s: coredns logs:
error: context "cilium-456967" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-456967" does not exist

>>> k8s: api server logs:
error: context "cilium-456967" does not exist

>>> host: /etc/cni:
* Profile "cilium-456967" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-456967"

>>> host: ip a s:
* Profile "cilium-456967" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-456967"

>>> host: ip r s:
* Profile "cilium-456967" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-456967"

>>> host: iptables-save:
* Profile "cilium-456967" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-456967"

>>> host: iptables table nat:
* Profile "cilium-456967" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-456967"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-456967

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-456967

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-456967" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-456967" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-456967

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-456967

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-456967" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-456967" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-456967" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-456967" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-456967" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-456967" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-456967"

>>> host: kubelet daemon config:
* Profile "cilium-456967" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-456967"

>>> k8s: kubelet logs:
* Profile "cilium-456967" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-456967"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-456967" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-456967"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-456967" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-456967"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-456967

>>> host: docker daemon status:
* Profile "cilium-456967" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-456967"

>>> host: docker daemon config:
* Profile "cilium-456967" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-456967"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-456967" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-456967"

>>> host: docker system info:
* Profile "cilium-456967" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-456967"

>>> host: cri-docker daemon status:
* Profile "cilium-456967" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-456967"

>>> host: cri-docker daemon config:
* Profile "cilium-456967" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-456967"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-456967" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-456967"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-456967" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-456967"

>>> host: cri-dockerd version:
* Profile "cilium-456967" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-456967"

>>> host: containerd daemon status:
* Profile "cilium-456967" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-456967"

>>> host: containerd daemon config:
* Profile "cilium-456967" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-456967"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-456967" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-456967"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-456967" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-456967"

>>> host: containerd config dump:
* Profile "cilium-456967" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-456967"

>>> host: crio daemon status:
* Profile "cilium-456967" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-456967"

>>> host: crio daemon config:
* Profile "cilium-456967" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-456967"

>>> host: /etc/crio:
* Profile "cilium-456967" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-456967"

>>> host: crio config:
* Profile "cilium-456967" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-456967"

----------------------- debugLogs end: cilium-456967 [took: 4.745952489s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-456967" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-456967
--- SKIP: TestNetworkPlugins/group/cilium (4.92s)
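Every probe in the debugLogs dump above fails the same way for the same reason: the commands target a kubeconfig context and a minikube profile that were never created, because the test skipped before starting a cluster. A minimal reproduction of that failure shape (exact kubectl wording varies with version and kubeconfig state):

-- example (illustrative) --
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Pointing kubectl at a context absent from the kubeconfig produces
	// the same class of error seen throughout the dump, e.g.
	// `error: context "cilium-456967" does not exist`.
	out, err := exec.Command("kubectl", "--context", "cilium-456967", "get", "pods").CombinedOutput()
	fmt.Printf("%s(exit: %v)\n", out, err)
}
-- /example --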