Test Report: Docker_Linux_docker_arm64 19522

d15490255971b1813e1f056874620592048fd695:2024-08-28:35972

Failed tests (1/343)

Order  Failed test                   Duration
33     TestAddons/parallel/Registry  75.64s
TestAddons/parallel/Registry (75.64s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 3.086676ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-bg785" [f77f0c2c-2c65-4211-879a-30b245bc30e8] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.011818334s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-46gqr" [4c757aa1-4447-48a4-9113-9bef04a988f4] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.024380518s
addons_test.go:342: (dbg) Run:  kubectl --context addons-958846 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-958846 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-958846 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.117018601s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-958846 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
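For context on the failure above: the test's check is simply "poll the in-cluster registry Service until it answers HTTP 200, or give up after a timeout," which is what the `wget --spider` busybox pod does before the harness reports "timed out waiting for the condition." A minimal sketch of that poll-until-200-or-timeout logic (a hypothetical helper, not the test's actual code, exercised against a local HTTP server standing in for `registry.kube-system.svc.cluster.local`):

```python
import http.server
import threading
import time
import urllib.error
import urllib.request

def wait_for_http_200(url, timeout_s=60.0, interval_s=1.0):
    """Poll `url` until it answers HTTP 200, or give up after `timeout_s`.

    Mirrors the shape of the failing check: success only on a 200
    response within the deadline; anything else keeps retrying until
    the deadline passes, then reports failure.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=interval_s) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # service not reachable yet (refused, DNS error, non-200): retry
        time.sleep(interval_s)
    return False

if __name__ == "__main__":
    # Local stand-in for the registry Service endpoint.
    server = http.server.HTTPServer(("127.0.0.1", 0),
                                    http.server.SimpleHTTPRequestHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    port = server.server_address[1]
    print(wait_for_http_200(f"http://127.0.0.1:{port}/", timeout_s=5))
    server.shutdown()
```

In the failing run the 200 never arrives within the one-minute window, so the check exits nonzero and the only output left is the pod-deletion message seen in stdout above.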
addons_test.go:361: (dbg) Run:  out/minikube-linux-arm64 -p addons-958846 ip
addons_test.go:390: (dbg) Run:  out/minikube-linux-arm64 -p addons-958846 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-958846
helpers_test.go:235: (dbg) docker inspect addons-958846:

-- stdout --
	[
	    {
	        "Id": "baa2108f9526e409b9b3e9fd1c71bd3b7495b0182c84c47bd50977e5bbb8e375",
	        "Created": "2024-08-27T22:41:35.001801199Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1744513,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-27T22:41:35.14459075Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0985147309945253cbe7e881ef8b47b2eeae8c92bbeecfbcb5398ea2f50c97c6",
	        "ResolvConfPath": "/var/lib/docker/containers/baa2108f9526e409b9b3e9fd1c71bd3b7495b0182c84c47bd50977e5bbb8e375/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/baa2108f9526e409b9b3e9fd1c71bd3b7495b0182c84c47bd50977e5bbb8e375/hostname",
	        "HostsPath": "/var/lib/docker/containers/baa2108f9526e409b9b3e9fd1c71bd3b7495b0182c84c47bd50977e5bbb8e375/hosts",
	        "LogPath": "/var/lib/docker/containers/baa2108f9526e409b9b3e9fd1c71bd3b7495b0182c84c47bd50977e5bbb8e375/baa2108f9526e409b9b3e9fd1c71bd3b7495b0182c84c47bd50977e5bbb8e375-json.log",
	        "Name": "/addons-958846",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-958846:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-958846",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e3ee8752af968318d1a54f76374ba8f24a08de691abd4c8f933fc96b355328d6-init/diff:/var/lib/docker/overlay2/4e3cff8a6313e34adc7c7c9c381cc06c51f4e2d3b13b46a6cdfa44f196510032/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e3ee8752af968318d1a54f76374ba8f24a08de691abd4c8f933fc96b355328d6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e3ee8752af968318d1a54f76374ba8f24a08de691abd4c8f933fc96b355328d6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e3ee8752af968318d1a54f76374ba8f24a08de691abd4c8f933fc96b355328d6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-958846",
	                "Source": "/var/lib/docker/volumes/addons-958846/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-958846",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-958846",
	                "name.minikube.sigs.k8s.io": "addons-958846",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "02c0039c17fd8b26bc4cf0fe6d82ce69e9ce0d9d1c4b99bcf5eaadf6117c08cc",
	            "SandboxKey": "/var/run/docker/netns/02c0039c17fd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-958846": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "2bdd07bed0e42401a4fd8315bd3f94687b5cf305d2467bd8cc6b8335e6acd5eb",
	                    "EndpointID": "a6c6b38e21b287ec7cf0afaeaa9caf9513d4460874122ae342ed30aa0e62f554",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-958846",
	                        "baa2108f9526"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-958846 -n addons-958846
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-958846 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-958846 logs -n 25: (1.512510491s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube               | jenkins | v1.33.1 | 27 Aug 24 22:41 UTC | 27 Aug 24 22:41 UTC |
	| delete  | -p download-only-871822                                                                     | download-only-871822   | jenkins | v1.33.1 | 27 Aug 24 22:41 UTC | 27 Aug 24 22:41 UTC |
	| start   | -o=json --download-only                                                                     | download-only-770752   | jenkins | v1.33.1 | 27 Aug 24 22:41 UTC |                     |
	|         | -p download-only-770752                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                                                                |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube               | jenkins | v1.33.1 | 27 Aug 24 22:41 UTC | 27 Aug 24 22:41 UTC |
	| delete  | -p download-only-770752                                                                     | download-only-770752   | jenkins | v1.33.1 | 27 Aug 24 22:41 UTC | 27 Aug 24 22:41 UTC |
	| delete  | -p download-only-871822                                                                     | download-only-871822   | jenkins | v1.33.1 | 27 Aug 24 22:41 UTC | 27 Aug 24 22:41 UTC |
	| delete  | -p download-only-770752                                                                     | download-only-770752   | jenkins | v1.33.1 | 27 Aug 24 22:41 UTC | 27 Aug 24 22:41 UTC |
	| start   | --download-only -p                                                                          | download-docker-337482 | jenkins | v1.33.1 | 27 Aug 24 22:41 UTC |                     |
	|         | download-docker-337482                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | -p download-docker-337482                                                                   | download-docker-337482 | jenkins | v1.33.1 | 27 Aug 24 22:41 UTC | 27 Aug 24 22:41 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-761535   | jenkins | v1.33.1 | 27 Aug 24 22:41 UTC |                     |
	|         | binary-mirror-761535                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:41893                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-761535                                                                     | binary-mirror-761535   | jenkins | v1.33.1 | 27 Aug 24 22:41 UTC | 27 Aug 24 22:41 UTC |
	| addons  | disable dashboard -p                                                                        | addons-958846          | jenkins | v1.33.1 | 27 Aug 24 22:41 UTC |                     |
	|         | addons-958846                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-958846          | jenkins | v1.33.1 | 27 Aug 24 22:41 UTC |                     |
	|         | addons-958846                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-958846 --wait=true                                                                | addons-958846          | jenkins | v1.33.1 | 27 Aug 24 22:41 UTC | 27 Aug 24 22:44 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | addons-958846 addons disable                                                                | addons-958846          | jenkins | v1.33.1 | 27 Aug 24 22:45 UTC | 27 Aug 24 22:45 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-958846 addons disable                                                                | addons-958846          | jenkins | v1.33.1 | 27 Aug 24 22:53 UTC | 27 Aug 24 22:53 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | addons-958846 addons                                                                        | addons-958846          | jenkins | v1.33.1 | 27 Aug 24 22:54 UTC | 27 Aug 24 22:54 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-958846 addons                                                                        | addons-958846          | jenkins | v1.33.1 | 27 Aug 24 22:54 UTC | 27 Aug 24 22:54 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-958846          | jenkins | v1.33.1 | 27 Aug 24 22:54 UTC | 27 Aug 24 22:54 UTC |
	|         | -p addons-958846                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-958846 ssh cat                                                                       | addons-958846          | jenkins | v1.33.1 | 27 Aug 24 22:54 UTC | 27 Aug 24 22:54 UTC |
	|         | /opt/local-path-provisioner/pvc-aad4af14-49f8-4159-b469-887f08026e79_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-958846 addons disable                                                                | addons-958846          | jenkins | v1.33.1 | 27 Aug 24 22:54 UTC | 27 Aug 24 22:54 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-958846 ip                                                                            | addons-958846          | jenkins | v1.33.1 | 27 Aug 24 22:54 UTC | 27 Aug 24 22:54 UTC |
	| addons  | disable cloud-spanner -p                                                                    | addons-958846          | jenkins | v1.33.1 | 27 Aug 24 22:54 UTC | 27 Aug 24 22:54 UTC |
	|         | addons-958846                                                                               |                        |         |         |                     |                     |
	| addons  | addons-958846 addons disable                                                                | addons-958846          | jenkins | v1.33.1 | 27 Aug 24 22:54 UTC | 27 Aug 24 22:54 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-958846          | jenkins | v1.33.1 | 27 Aug 24 22:54 UTC |                     |
	|         | -p addons-958846                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/27 22:41:10
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0827 22:41:10.084755 1744024 out.go:345] Setting OutFile to fd 1 ...
	I0827 22:41:10.085037 1744024 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 22:41:10.085049 1744024 out.go:358] Setting ErrFile to fd 2...
	I0827 22:41:10.085055 1744024 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 22:41:10.085360 1744024 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19522-1737862/.minikube/bin
	I0827 22:41:10.085955 1744024 out.go:352] Setting JSON to false
	I0827 22:41:10.087185 1744024 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":23018,"bootTime":1724775452,"procs":412,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0827 22:41:10.087279 1744024 start.go:139] virtualization:  
	I0827 22:41:10.089356 1744024 out.go:177] * [addons-958846] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0827 22:41:10.091059 1744024 out.go:177]   - MINIKUBE_LOCATION=19522
	I0827 22:41:10.091165 1744024 notify.go:220] Checking for updates...
	I0827 22:41:10.093517 1744024 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0827 22:41:10.095244 1744024 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19522-1737862/kubeconfig
	I0827 22:41:10.096489 1744024 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19522-1737862/.minikube
	I0827 22:41:10.097691 1744024 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0827 22:41:10.098994 1744024 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0827 22:41:10.100790 1744024 driver.go:392] Setting default libvirt URI to qemu:///system
	I0827 22:41:10.124602 1744024 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0827 22:41:10.124737 1744024 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0827 22:41:10.194618 1744024 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-27 22:41:10.184339901 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0827 22:41:10.194739 1744024 docker.go:307] overlay module found
	I0827 22:41:10.196420 1744024 out.go:177] * Using the docker driver based on user configuration
	I0827 22:41:10.197608 1744024 start.go:297] selected driver: docker
	I0827 22:41:10.197630 1744024 start.go:901] validating driver "docker" against <nil>
	I0827 22:41:10.197647 1744024 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0827 22:41:10.198325 1744024 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0827 22:41:10.251130 1744024 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-27 22:41:10.241658166 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0827 22:41:10.251297 1744024 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0827 22:41:10.251523 1744024 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0827 22:41:10.253118 1744024 out.go:177] * Using Docker driver with root privileges
	I0827 22:41:10.254353 1744024 cni.go:84] Creating CNI manager for ""
	I0827 22:41:10.254386 1744024 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0827 22:41:10.254404 1744024 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0827 22:41:10.254481 1744024 start.go:340] cluster config:
	{Name:addons-958846 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-958846 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 22:41:10.256655 1744024 out.go:177] * Starting "addons-958846" primary control-plane node in "addons-958846" cluster
	I0827 22:41:10.257789 1744024 cache.go:121] Beginning downloading kic base image for docker with docker
	I0827 22:41:10.259093 1744024 out.go:177] * Pulling base image v0.0.44-1724667927-19511 ...
	I0827 22:41:10.260149 1744024 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0827 22:41:10.260176 1744024 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 in local docker daemon
	I0827 22:41:10.260208 1744024 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19522-1737862/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0827 22:41:10.260235 1744024 cache.go:56] Caching tarball of preloaded images
	I0827 22:41:10.260319 1744024 preload.go:172] Found /home/jenkins/minikube-integration/19522-1737862/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0827 22:41:10.260329 1744024 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0827 22:41:10.260695 1744024 profile.go:143] Saving config to /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/addons-958846/config.json ...
	I0827 22:41:10.260770 1744024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/addons-958846/config.json: {Name:mkfe1571f39eab5ae77852356f4b6bdcce8b4e69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 22:41:10.276178 1744024 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 to local cache
	I0827 22:41:10.276297 1744024 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 in local cache directory
	I0827 22:41:10.276323 1744024 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 in local cache directory, skipping pull
	I0827 22:41:10.276329 1744024 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 exists in cache, skipping pull
	I0827 22:41:10.276342 1744024 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 as a tarball
	I0827 22:41:10.276348 1744024 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 from local cache
	I0827 22:41:27.350554 1744024 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 from cached tarball
	I0827 22:41:27.350596 1744024 cache.go:194] Successfully downloaded all kic artifacts
	I0827 22:41:27.350648 1744024 start.go:360] acquireMachinesLock for addons-958846: {Name:mk07726d4a80793bd24cd17984414d751022c14e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 22:41:27.350773 1744024 start.go:364] duration metric: took 101.167µs to acquireMachinesLock for "addons-958846"
	I0827 22:41:27.350805 1744024 start.go:93] Provisioning new machine with config: &{Name:addons-958846 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-958846 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0827 22:41:27.350902 1744024 start.go:125] createHost starting for "" (driver="docker")
	I0827 22:41:27.352398 1744024 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0827 22:41:27.352664 1744024 start.go:159] libmachine.API.Create for "addons-958846" (driver="docker")
	I0827 22:41:27.352697 1744024 client.go:168] LocalClient.Create starting
	I0827 22:41:27.352829 1744024 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19522-1737862/.minikube/certs/ca.pem
	I0827 22:41:28.530874 1744024 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19522-1737862/.minikube/certs/cert.pem
	I0827 22:41:29.039018 1744024 cli_runner.go:164] Run: docker network inspect addons-958846 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0827 22:41:29.054532 1744024 cli_runner.go:211] docker network inspect addons-958846 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0827 22:41:29.054639 1744024 network_create.go:284] running [docker network inspect addons-958846] to gather additional debugging logs...
	I0827 22:41:29.054662 1744024 cli_runner.go:164] Run: docker network inspect addons-958846
	W0827 22:41:29.069745 1744024 cli_runner.go:211] docker network inspect addons-958846 returned with exit code 1
	I0827 22:41:29.069776 1744024 network_create.go:287] error running [docker network inspect addons-958846]: docker network inspect addons-958846: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-958846 not found
	I0827 22:41:29.069793 1744024 network_create.go:289] output of [docker network inspect addons-958846]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-958846 not found
	
	** /stderr **
	I0827 22:41:29.069886 1744024 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0827 22:41:29.084549 1744024 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001754740}
	I0827 22:41:29.084599 1744024 network_create.go:124] attempt to create docker network addons-958846 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0827 22:41:29.084659 1744024 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-958846 addons-958846
	I0827 22:41:29.159876 1744024 network_create.go:108] docker network addons-958846 192.168.49.0/24 created
	I0827 22:41:29.159910 1744024 kic.go:121] calculated static IP "192.168.49.2" for the "addons-958846" container
	I0827 22:41:29.159991 1744024 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0827 22:41:29.175470 1744024 cli_runner.go:164] Run: docker volume create addons-958846 --label name.minikube.sigs.k8s.io=addons-958846 --label created_by.minikube.sigs.k8s.io=true
	I0827 22:41:29.192927 1744024 oci.go:103] Successfully created a docker volume addons-958846
	I0827 22:41:29.193027 1744024 cli_runner.go:164] Run: docker run --rm --name addons-958846-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-958846 --entrypoint /usr/bin/test -v addons-958846:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 -d /var/lib
	I0827 22:41:30.794428 1744024 cli_runner.go:217] Completed: docker run --rm --name addons-958846-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-958846 --entrypoint /usr/bin/test -v addons-958846:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 -d /var/lib: (1.601346925s)
	I0827 22:41:30.794471 1744024 oci.go:107] Successfully prepared a docker volume addons-958846
	I0827 22:41:30.794492 1744024 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0827 22:41:30.794513 1744024 kic.go:194] Starting extracting preloaded images to volume ...
	I0827 22:41:30.794594 1744024 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19522-1737862/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-958846:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 -I lz4 -xf /preloaded.tar -C /extractDir
	I0827 22:41:34.922485 1744024 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19522-1737862/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-958846:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 -I lz4 -xf /preloaded.tar -C /extractDir: (4.127849584s)
	I0827 22:41:34.922521 1744024 kic.go:203] duration metric: took 4.128006175s to extract preloaded images to volume ...
	W0827 22:41:34.922672 1744024 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0827 22:41:34.922782 1744024 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0827 22:41:34.987325 1744024 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-958846 --name addons-958846 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-958846 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-958846 --network addons-958846 --ip 192.168.49.2 --volume addons-958846:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760
	I0827 22:41:35.329743 1744024 cli_runner.go:164] Run: docker container inspect addons-958846 --format={{.State.Running}}
	I0827 22:41:35.351277 1744024 cli_runner.go:164] Run: docker container inspect addons-958846 --format={{.State.Status}}
	I0827 22:41:35.371198 1744024 cli_runner.go:164] Run: docker exec addons-958846 stat /var/lib/dpkg/alternatives/iptables
	I0827 22:41:35.437926 1744024 oci.go:144] the created container "addons-958846" has a running status.
	I0827 22:41:35.437963 1744024 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19522-1737862/.minikube/machines/addons-958846/id_rsa...
	I0827 22:41:36.670008 1744024 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19522-1737862/.minikube/machines/addons-958846/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0827 22:41:36.696682 1744024 cli_runner.go:164] Run: docker container inspect addons-958846 --format={{.State.Status}}
	I0827 22:41:36.712876 1744024 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0827 22:41:36.712901 1744024 kic_runner.go:114] Args: [docker exec --privileged addons-958846 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0827 22:41:36.761472 1744024 cli_runner.go:164] Run: docker container inspect addons-958846 --format={{.State.Status}}
	I0827 22:41:36.777888 1744024 machine.go:93] provisionDockerMachine start ...
	I0827 22:41:36.777986 1744024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-958846
	I0827 22:41:36.793989 1744024 main.go:141] libmachine: Using SSH client type: native
	I0827 22:41:36.794248 1744024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0827 22:41:36.794263 1744024 main.go:141] libmachine: About to run SSH command:
	hostname
	I0827 22:41:36.940751 1744024 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-958846
	
	I0827 22:41:36.940780 1744024 ubuntu.go:169] provisioning hostname "addons-958846"
	I0827 22:41:36.940846 1744024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-958846
	I0827 22:41:36.960033 1744024 main.go:141] libmachine: Using SSH client type: native
	I0827 22:41:36.960279 1744024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0827 22:41:36.960298 1744024 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-958846 && echo "addons-958846" | sudo tee /etc/hostname
	I0827 22:41:37.124760 1744024 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-958846
	
	I0827 22:41:37.124842 1744024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-958846
	I0827 22:41:37.144068 1744024 main.go:141] libmachine: Using SSH client type: native
	I0827 22:41:37.144308 1744024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0827 22:41:37.144330 1744024 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-958846' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-958846/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-958846' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0827 22:41:37.288527 1744024 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0827 22:41:37.288559 1744024 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19522-1737862/.minikube CaCertPath:/home/jenkins/minikube-integration/19522-1737862/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19522-1737862/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19522-1737862/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19522-1737862/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19522-1737862/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19522-1737862/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19522-1737862/.minikube}
	I0827 22:41:37.288636 1744024 ubuntu.go:177] setting up certificates
	I0827 22:41:37.288646 1744024 provision.go:84] configureAuth start
	I0827 22:41:37.288725 1744024 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-958846
	I0827 22:41:37.305410 1744024 provision.go:143] copyHostCerts
	I0827 22:41:37.305493 1744024 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19522-1737862/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19522-1737862/.minikube/key.pem (1679 bytes)
	I0827 22:41:37.305633 1744024 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19522-1737862/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19522-1737862/.minikube/ca.pem (1078 bytes)
	I0827 22:41:37.305697 1744024 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19522-1737862/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19522-1737862/.minikube/cert.pem (1123 bytes)
	I0827 22:41:37.305747 1744024 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19522-1737862/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19522-1737862/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19522-1737862/.minikube/certs/ca-key.pem org=jenkins.addons-958846 san=[127.0.0.1 192.168.49.2 addons-958846 localhost minikube]
	I0827 22:41:37.742012 1744024 provision.go:177] copyRemoteCerts
	I0827 22:41:37.742083 1744024 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0827 22:41:37.742124 1744024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-958846
	I0827 22:41:37.758684 1744024 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19522-1737862/.minikube/machines/addons-958846/id_rsa Username:docker}
	I0827 22:41:37.865769 1744024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-1737862/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0827 22:41:37.889983 1744024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-1737862/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0827 22:41:37.913826 1744024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-1737862/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0827 22:41:37.938427 1744024 provision.go:87] duration metric: took 649.761315ms to configureAuth
	I0827 22:41:37.938458 1744024 ubuntu.go:193] setting minikube options for container-runtime
	I0827 22:41:37.938652 1744024 config.go:182] Loaded profile config "addons-958846": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0827 22:41:37.938716 1744024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-958846
	I0827 22:41:37.955280 1744024 main.go:141] libmachine: Using SSH client type: native
	I0827 22:41:37.955531 1744024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0827 22:41:37.955548 1744024 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0827 22:41:38.109419 1744024 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0827 22:41:38.109442 1744024 ubuntu.go:71] root file system type: overlay
	I0827 22:41:38.109558 1744024 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0827 22:41:38.109636 1744024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-958846
	I0827 22:41:38.127189 1744024 main.go:141] libmachine: Using SSH client type: native
	I0827 22:41:38.127446 1744024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0827 22:41:38.127526 1744024 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0827 22:41:38.284507 1744024 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0827 22:41:38.284600 1744024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-958846
	I0827 22:41:38.301162 1744024 main.go:141] libmachine: Using SSH client type: native
	I0827 22:41:38.301408 1744024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0827 22:41:38.301430 1744024 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0827 22:41:39.079430 1744024 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-08-12 11:49:05.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-08-27 22:41:38.277961913 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
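The unit update logged above follows a write-then-swap pattern: minikube writes the generated unit to `docker.service.new`, and only replaces the live unit (and restarts Docker) when `diff` reports a difference. A minimal standalone sketch of that idempotent pattern, using throwaway files rather than real systemd units (all paths here are illustrative):

```shell
#!/bin/sh
# Sketch of the diff-or-replace update seen in the log (illustrative files only).
set -eu
dir=$(mktemp -d)
trap 'rm -rf "$dir"' EXIT

printf 'setting=old\n' > "$dir/app.conf"      # the currently installed file
printf 'setting=new\n' > "$dir/app.conf.new"  # the freshly generated candidate

# diff exits non-zero when the files differ, so the replacement branch runs
# only on a real change; identical files leave everything untouched.
diff -u "$dir/app.conf" "$dir/app.conf.new" >/dev/null || {
    mv "$dir/app.conf.new" "$dir/app.conf"
    echo 'replaced'   # stands in for daemon-reload / enable / restart
}

cat "$dir/app.conf"
```

Because the branch only fires on a change, re-running provisioning against an already-configured machine skips the Docker restart entirely.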
	I0827 22:41:39.079470 1744024 machine.go:96] duration metric: took 2.30155978s to provisionDockerMachine
	I0827 22:41:39.079482 1744024 client.go:171] duration metric: took 11.726771139s to LocalClient.Create
	I0827 22:41:39.079496 1744024 start.go:167] duration metric: took 11.726832086s to libmachine.API.Create "addons-958846"
	I0827 22:41:39.079504 1744024 start.go:293] postStartSetup for "addons-958846" (driver="docker")
	I0827 22:41:39.079518 1744024 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0827 22:41:39.079592 1744024 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0827 22:41:39.079639 1744024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-958846
	I0827 22:41:39.096991 1744024 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19522-1737862/.minikube/machines/addons-958846/id_rsa Username:docker}
	I0827 22:41:39.198072 1744024 ssh_runner.go:195] Run: cat /etc/os-release
	I0827 22:41:39.201440 1744024 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0827 22:41:39.201482 1744024 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0827 22:41:39.201494 1744024 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0827 22:41:39.201503 1744024 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0827 22:41:39.201515 1744024 filesync.go:126] Scanning /home/jenkins/minikube-integration/19522-1737862/.minikube/addons for local assets ...
	I0827 22:41:39.201601 1744024 filesync.go:126] Scanning /home/jenkins/minikube-integration/19522-1737862/.minikube/files for local assets ...
	I0827 22:41:39.201636 1744024 start.go:296] duration metric: took 122.122066ms for postStartSetup
	I0827 22:41:39.202035 1744024 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-958846
	I0827 22:41:39.218424 1744024 profile.go:143] Saving config to /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/addons-958846/config.json ...
	I0827 22:41:39.218735 1744024 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0827 22:41:39.218793 1744024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-958846
	I0827 22:41:39.238781 1744024 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19522-1737862/.minikube/machines/addons-958846/id_rsa Username:docker}
	I0827 22:41:39.338113 1744024 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0827 22:41:39.343884 1744024 start.go:128] duration metric: took 11.992966379s to createHost
	I0827 22:41:39.343909 1744024 start.go:83] releasing machines lock for "addons-958846", held for 11.993123635s
	I0827 22:41:39.344000 1744024 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-958846
	I0827 22:41:39.364178 1744024 ssh_runner.go:195] Run: cat /version.json
	I0827 22:41:39.364200 1744024 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0827 22:41:39.364250 1744024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-958846
	I0827 22:41:39.364273 1744024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-958846
	I0827 22:41:39.383742 1744024 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19522-1737862/.minikube/machines/addons-958846/id_rsa Username:docker}
	I0827 22:41:39.386491 1744024 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19522-1737862/.minikube/machines/addons-958846/id_rsa Username:docker}
	I0827 22:41:39.480020 1744024 ssh_runner.go:195] Run: systemctl --version
	I0827 22:41:39.630576 1744024 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0827 22:41:39.635212 1744024 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0827 22:41:39.662890 1744024 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0827 22:41:39.662970 1744024 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0827 22:41:39.691726 1744024 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0827 22:41:39.691802 1744024 start.go:495] detecting cgroup driver to use...
	I0827 22:41:39.691849 1744024 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0827 22:41:39.692006 1744024 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0827 22:41:39.708394 1744024 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0827 22:41:39.718530 1744024 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0827 22:41:39.728542 1744024 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0827 22:41:39.728647 1744024 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0827 22:41:39.739375 1744024 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0827 22:41:39.749111 1744024 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0827 22:41:39.759043 1744024 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0827 22:41:39.769068 1744024 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0827 22:41:39.778521 1744024 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0827 22:41:39.788335 1744024 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0827 22:41:39.798472 1744024 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0827 22:41:39.808647 1744024 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0827 22:41:39.817130 1744024 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0827 22:41:39.825993 1744024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 22:41:39.909287 1744024 ssh_runner.go:195] Run: sudo systemctl restart containerd
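The run of `sed` commands above rewrites `/etc/containerd/config.toml` in place so containerd matches the "cgroupfs" driver detected on the host. The key edit can be reproduced against a scratch copy of the file (the sample TOML below is a trimmed illustration, not the full shipped config):

```shell
#!/bin/sh
# Flip SystemdCgroup off in a containerd config.toml copy, preserving indentation.
set -eu
cfg=$(mktemp)
trap 'rm -f "$cfg"' EXIT

cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF

# Same capture-group trick as the logged command: \1 re-emits the leading spaces.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"

grep 'SystemdCgroup' "$cfg"
```

The capture group matters because the key sits inside a nested TOML table; replacing the whole line without `\1` would break the file's indentation.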
	I0827 22:41:40.033366 1744024 start.go:495] detecting cgroup driver to use...
	I0827 22:41:40.033435 1744024 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0827 22:41:40.033501 1744024 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0827 22:41:40.048668 1744024 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0827 22:41:40.048762 1744024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0827 22:41:40.062705 1744024 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0827 22:41:40.081270 1744024 ssh_runner.go:195] Run: which cri-dockerd
	I0827 22:41:40.085874 1744024 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0827 22:41:40.096697 1744024 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0827 22:41:40.123013 1744024 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0827 22:41:40.230809 1744024 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0827 22:41:40.335955 1744024 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0827 22:41:40.336168 1744024 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0827 22:41:40.356332 1744024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 22:41:40.449842 1744024 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0827 22:41:40.711028 1744024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0827 22:41:40.723262 1744024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0827 22:41:40.736878 1744024 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0827 22:41:40.827037 1744024 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0827 22:41:40.909824 1744024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 22:41:40.998387 1744024 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0827 22:41:41.013915 1744024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0827 22:41:41.025979 1744024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 22:41:41.113589 1744024 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0827 22:41:41.181641 1744024 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0827 22:41:41.181750 1744024 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0827 22:41:41.185473 1744024 start.go:563] Will wait 60s for crictl version
	I0827 22:41:41.185556 1744024 ssh_runner.go:195] Run: which crictl
	I0827 22:41:41.189239 1744024 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0827 22:41:41.233351 1744024 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0827 22:41:41.233464 1744024 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0827 22:41:41.254859 1744024 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0827 22:41:41.277758 1744024 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0827 22:41:41.277936 1744024 cli_runner.go:164] Run: docker network inspect addons-958846 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0827 22:41:41.293955 1744024 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0827 22:41:41.297720 1744024 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
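The `/etc/hosts` update above is made idempotent by filtering out any existing `host.minikube.internal` line before appending the current mapping, then copying the temp file back over the original. A sketch of the same filter-and-append step on a scratch hosts file (the stale `10.0.0.9` entry is invented for the demo; `192.168.49.1` is the gateway address from this run):

```shell
#!/bin/bash
# Rewrite a hosts file so it contains exactly one entry for a managed name.
set -eu
hosts=$(mktemp)
trap 'rm -f "$hosts"' EXIT

printf '127.0.0.1\tlocalhost\n10.0.0.9\thost.minikube.internal\n' > "$hosts"

# grep -v drops the stale entry (tab-anchored, as in the logged command),
# then the new mapping is appended and the result replaces the original.
{ grep -v $'\thost.minikube.internal$' "$hosts"; printf '192.168.49.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"

grep 'host.minikube.internal' "$hosts"
```

Anchoring on the tab plus an end-of-line match keeps unrelated entries (and names that merely contain the string) intact.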
	I0827 22:41:41.309030 1744024 kubeadm.go:883] updating cluster {Name:addons-958846 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-958846 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuF
irmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0827 22:41:41.309152 1744024 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0827 22:41:41.309222 1744024 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0827 22:41:41.328506 1744024 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0827 22:41:41.328531 1744024 docker.go:615] Images already preloaded, skipping extraction
	I0827 22:41:41.328605 1744024 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0827 22:41:41.349017 1744024 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0827 22:41:41.349040 1744024 cache_images.go:84] Images are preloaded, skipping loading
	I0827 22:41:41.349060 1744024 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.0 docker true true} ...
	I0827 22:41:41.349161 1744024 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-958846 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-958846 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0827 22:41:41.349235 1744024 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0827 22:41:41.397683 1744024 cni.go:84] Creating CNI manager for ""
	I0827 22:41:41.397712 1744024 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0827 22:41:41.397723 1744024 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0827 22:41:41.397762 1744024 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-958846 NodeName:addons-958846 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuber
netes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0827 22:41:41.397958 1744024 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-958846"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0827 22:41:41.398036 1744024 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0827 22:41:41.406976 1744024 binaries.go:44] Found k8s binaries, skipping transfer
	I0827 22:41:41.407047 1744024 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0827 22:41:41.415629 1744024 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0827 22:41:41.433286 1744024 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0827 22:41:41.452092 1744024 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0827 22:41:41.470686 1744024 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0827 22:41:41.474304 1744024 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0827 22:41:41.485215 1744024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 22:41:41.568719 1744024 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0827 22:41:41.592548 1744024 certs.go:68] Setting up /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/addons-958846 for IP: 192.168.49.2
	I0827 22:41:41.592576 1744024 certs.go:194] generating shared ca certs ...
	I0827 22:41:41.592594 1744024 certs.go:226] acquiring lock for ca certs: {Name:mk4c8b8fb21c269ce29051fe6a934a6f77785af2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 22:41:41.593216 1744024 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19522-1737862/.minikube/ca.key
	I0827 22:41:41.886066 1744024 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19522-1737862/.minikube/ca.crt ...
	I0827 22:41:41.886104 1744024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-1737862/.minikube/ca.crt: {Name:mkeb4b80356c44977921738e1a5ebfc3aace8424 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 22:41:41.886310 1744024 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19522-1737862/.minikube/ca.key ...
	I0827 22:41:41.886326 1744024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-1737862/.minikube/ca.key: {Name:mkc76246a26ff766acb74086bd9615770d341476 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 22:41:41.886418 1744024 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19522-1737862/.minikube/proxy-client-ca.key
	I0827 22:41:42.393566 1744024 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19522-1737862/.minikube/proxy-client-ca.crt ...
	I0827 22:41:42.393605 1744024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-1737862/.minikube/proxy-client-ca.crt: {Name:mkc488ff90adc56d8e70fbf54de25d6c82705b41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 22:41:42.394561 1744024 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19522-1737862/.minikube/proxy-client-ca.key ...
	I0827 22:41:42.394580 1744024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-1737862/.minikube/proxy-client-ca.key: {Name:mk56c70e0a7a035e95db9082d7cbb11c4a9b5da2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 22:41:42.394697 1744024 certs.go:256] generating profile certs ...
	I0827 22:41:42.394768 1744024 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/addons-958846/client.key
	I0827 22:41:42.394787 1744024 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/addons-958846/client.crt with IP's: []
	I0827 22:41:42.929734 1744024 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/addons-958846/client.crt ...
	I0827 22:41:42.929769 1744024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/addons-958846/client.crt: {Name:mk1f1b3e925fec72a04f51a873ab7eb753d5183a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 22:41:42.930408 1744024 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/addons-958846/client.key ...
	I0827 22:41:42.930427 1744024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/addons-958846/client.key: {Name:mk1f50e0ffeaffd490fdb7faebdaa66e0856f1eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 22:41:42.930517 1744024 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/addons-958846/apiserver.key.e2d35b43
	I0827 22:41:42.930540 1744024 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/addons-958846/apiserver.crt.e2d35b43 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0827 22:41:43.338523 1744024 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/addons-958846/apiserver.crt.e2d35b43 ...
	I0827 22:41:43.338557 1744024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/addons-958846/apiserver.crt.e2d35b43: {Name:mk55f9242ef71fa03a6046ae5b0bd7b8033f46c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 22:41:43.339437 1744024 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/addons-958846/apiserver.key.e2d35b43 ...
	I0827 22:41:43.339458 1744024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/addons-958846/apiserver.key.e2d35b43: {Name:mkb98e112f95a93e5e4b82cfcfb27cf6f2c5dd59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 22:41:43.340000 1744024 certs.go:381] copying /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/addons-958846/apiserver.crt.e2d35b43 -> /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/addons-958846/apiserver.crt
	I0827 22:41:43.340117 1744024 certs.go:385] copying /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/addons-958846/apiserver.key.e2d35b43 -> /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/addons-958846/apiserver.key
	I0827 22:41:43.340179 1744024 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/addons-958846/proxy-client.key
	I0827 22:41:43.340207 1744024 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/addons-958846/proxy-client.crt with IP's: []
	I0827 22:41:44.206584 1744024 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/addons-958846/proxy-client.crt ...
	I0827 22:41:44.206625 1744024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/addons-958846/proxy-client.crt: {Name:mk2cca9a77274c9330d8d97918d287a2d4415fe9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 22:41:44.206820 1744024 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/addons-958846/proxy-client.key ...
	I0827 22:41:44.206835 1744024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/addons-958846/proxy-client.key: {Name:mka1dc64c84a4597bc9cd9ced8fa0c7fffc8d549 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 22:41:44.207039 1744024 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-1737862/.minikube/certs/ca-key.pem (1679 bytes)
	I0827 22:41:44.207086 1744024 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-1737862/.minikube/certs/ca.pem (1078 bytes)
	I0827 22:41:44.207126 1744024 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-1737862/.minikube/certs/cert.pem (1123 bytes)
	I0827 22:41:44.207155 1744024 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-1737862/.minikube/certs/key.pem (1679 bytes)
	I0827 22:41:44.208112 1744024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-1737862/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0827 22:41:44.237295 1744024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-1737862/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0827 22:41:44.262289 1744024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-1737862/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0827 22:41:44.287532 1744024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-1737862/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0827 22:41:44.311428 1744024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/addons-958846/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0827 22:41:44.335637 1744024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/addons-958846/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0827 22:41:44.360189 1744024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/addons-958846/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0827 22:41:44.385508 1744024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/addons-958846/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0827 22:41:44.410484 1744024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-1737862/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0827 22:41:44.435142 1744024 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0827 22:41:44.454379 1744024 ssh_runner.go:195] Run: openssl version
	I0827 22:41:44.460181 1744024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0827 22:41:44.470591 1744024 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0827 22:41:44.474199 1744024 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 27 22:41 /usr/share/ca-certificates/minikubeCA.pem
	I0827 22:41:44.474267 1744024 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0827 22:41:44.481777 1744024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0827 22:41:44.491670 1744024 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0827 22:41:44.494905 1744024 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0827 22:41:44.494967 1744024 kubeadm.go:392] StartCluster: {Name:addons-958846 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-958846 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 22:41:44.495117 1744024 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0827 22:41:44.511243 1744024 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0827 22:41:44.520955 1744024 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0827 22:41:44.530286 1744024 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0827 22:41:44.530395 1744024 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0827 22:41:44.541716 1744024 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0827 22:41:44.541776 1744024 kubeadm.go:157] found existing configuration files:
	
	I0827 22:41:44.541861 1744024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0827 22:41:44.551206 1744024 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0827 22:41:44.551317 1744024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0827 22:41:44.559721 1744024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0827 22:41:44.569027 1744024 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0827 22:41:44.569191 1744024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0827 22:41:44.578325 1744024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0827 22:41:44.587987 1744024 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0827 22:41:44.588128 1744024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0827 22:41:44.597966 1744024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0827 22:41:44.606798 1744024 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0827 22:41:44.606876 1744024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0827 22:41:44.615628 1744024 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0827 22:41:44.659973 1744024 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0827 22:41:44.660296 1744024 kubeadm.go:310] [preflight] Running pre-flight checks
	I0827 22:41:44.682553 1744024 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0827 22:41:44.682635 1744024 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-aws
	I0827 22:41:44.682677 1744024 kubeadm.go:310] OS: Linux
	I0827 22:41:44.682725 1744024 kubeadm.go:310] CGROUPS_CPU: enabled
	I0827 22:41:44.682777 1744024 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0827 22:41:44.682827 1744024 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0827 22:41:44.682878 1744024 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0827 22:41:44.682928 1744024 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0827 22:41:44.682979 1744024 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0827 22:41:44.683028 1744024 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0827 22:41:44.683079 1744024 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0827 22:41:44.683126 1744024 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0827 22:41:44.744644 1744024 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0827 22:41:44.744757 1744024 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0827 22:41:44.744851 1744024 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0827 22:41:44.757063 1744024 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0827 22:41:44.759417 1744024 out.go:235]   - Generating certificates and keys ...
	I0827 22:41:44.759619 1744024 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0827 22:41:44.759728 1744024 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0827 22:41:45.495316 1744024 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0827 22:41:45.765436 1744024 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0827 22:41:46.315064 1744024 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0827 22:41:47.338262 1744024 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0827 22:41:47.975998 1744024 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0827 22:41:47.976318 1744024 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-958846 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0827 22:41:48.629848 1744024 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0827 22:41:48.630183 1744024 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-958846 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0827 22:41:49.026229 1744024 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0827 22:41:50.244792 1744024 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0827 22:41:50.561690 1744024 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0827 22:41:50.562202 1744024 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0827 22:41:50.806201 1744024 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0827 22:41:51.475833 1744024 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0827 22:41:51.781063 1744024 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0827 22:41:52.868873 1744024 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0827 22:41:53.187575 1744024 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0827 22:41:53.188471 1744024 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0827 22:41:53.191908 1744024 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0827 22:41:53.193568 1744024 out.go:235]   - Booting up control plane ...
	I0827 22:41:53.193666 1744024 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0827 22:41:53.193741 1744024 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0827 22:41:53.194924 1744024 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0827 22:41:53.207313 1744024 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0827 22:41:53.214162 1744024 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0827 22:41:53.214216 1744024 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0827 22:41:53.307463 1744024 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0827 22:41:53.307580 1744024 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0827 22:41:54.309794 1744024 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002428531s
	I0827 22:41:54.309891 1744024 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0827 22:42:00.812035 1744024 kubeadm.go:310] [api-check] The API server is healthy after 6.502242715s
	I0827 22:42:00.831945 1744024 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0827 22:42:00.845932 1744024 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0827 22:42:00.867471 1744024 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0827 22:42:00.867670 1744024 kubeadm.go:310] [mark-control-plane] Marking the node addons-958846 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0827 22:42:00.877474 1744024 kubeadm.go:310] [bootstrap-token] Using token: ofq5e9.2ojwhu2ohid3bfsk
	I0827 22:42:00.878741 1744024 out.go:235]   - Configuring RBAC rules ...
	I0827 22:42:00.878871 1744024 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0827 22:42:00.884197 1744024 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0827 22:42:00.892236 1744024 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0827 22:42:00.895713 1744024 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0827 22:42:00.900191 1744024 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0827 22:42:00.903965 1744024 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0827 22:42:01.220798 1744024 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0827 22:42:01.653392 1744024 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0827 22:42:02.221156 1744024 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0827 22:42:02.222354 1744024 kubeadm.go:310] 
	I0827 22:42:02.222437 1744024 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0827 22:42:02.222451 1744024 kubeadm.go:310] 
	I0827 22:42:02.222532 1744024 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0827 22:42:02.222551 1744024 kubeadm.go:310] 
	I0827 22:42:02.222577 1744024 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0827 22:42:02.222641 1744024 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0827 22:42:02.222694 1744024 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0827 22:42:02.222712 1744024 kubeadm.go:310] 
	I0827 22:42:02.222764 1744024 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0827 22:42:02.222774 1744024 kubeadm.go:310] 
	I0827 22:42:02.222823 1744024 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0827 22:42:02.222830 1744024 kubeadm.go:310] 
	I0827 22:42:02.222881 1744024 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0827 22:42:02.222960 1744024 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0827 22:42:02.223040 1744024 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0827 22:42:02.223049 1744024 kubeadm.go:310] 
	I0827 22:42:02.223130 1744024 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0827 22:42:02.223208 1744024 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0827 22:42:02.223221 1744024 kubeadm.go:310] 
	I0827 22:42:02.223303 1744024 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ofq5e9.2ojwhu2ohid3bfsk \
	I0827 22:42:02.223410 1744024 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0cbfc44babedd9008c313aa10ae56268b284c3d6bf0df46ea48503305a6cd245 \
	I0827 22:42:02.223433 1744024 kubeadm.go:310] 	--control-plane 
	I0827 22:42:02.223441 1744024 kubeadm.go:310] 
	I0827 22:42:02.223523 1744024 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0827 22:42:02.223532 1744024 kubeadm.go:310] 
	I0827 22:42:02.223611 1744024 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ofq5e9.2ojwhu2ohid3bfsk \
	I0827 22:42:02.223713 1744024 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0cbfc44babedd9008c313aa10ae56268b284c3d6bf0df46ea48503305a6cd245 
	I0827 22:42:02.228300 1744024 kubeadm.go:310] W0827 22:41:44.656371    1821 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0827 22:42:02.228710 1744024 kubeadm.go:310] W0827 22:41:44.657484    1821 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0827 22:42:02.228954 1744024 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-aws\n", err: exit status 1
	I0827 22:42:02.229068 1744024 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0827 22:42:02.229091 1744024 cni.go:84] Creating CNI manager for ""
	I0827 22:42:02.229107 1744024 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0827 22:42:02.231595 1744024 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0827 22:42:02.234193 1744024 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0827 22:42:02.243145 1744024 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0827 22:42:02.265539 1744024 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0827 22:42:02.265626 1744024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0827 22:42:02.265672 1744024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-958846 minikube.k8s.io/updated_at=2024_08_27T22_42_02_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=d0790207a2867fe8d040a9642b972c86ef680cdf minikube.k8s.io/name=addons-958846 minikube.k8s.io/primary=true
	I0827 22:42:02.429717 1744024 ops.go:34] apiserver oom_adj: -16
	I0827 22:42:02.429855 1744024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0827 22:42:02.930143 1744024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0827 22:42:03.430890 1744024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0827 22:42:03.930795 1744024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0827 22:42:04.430637 1744024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0827 22:42:04.930085 1744024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0827 22:42:05.430247 1744024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0827 22:42:05.930392 1744024 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0827 22:42:06.037389 1744024 kubeadm.go:1113] duration metric: took 3.77182511s to wait for elevateKubeSystemPrivileges
	I0827 22:42:06.037423 1744024 kubeadm.go:394] duration metric: took 21.542470532s to StartCluster
	I0827 22:42:06.037440 1744024 settings.go:142] acquiring lock: {Name:mk73e3c0d6f362f1eda5a15aa9a27171c53be66d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 22:42:06.037566 1744024 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19522-1737862/kubeconfig
	I0827 22:42:06.037996 1744024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-1737862/kubeconfig: {Name:mka7362d3e48058b15da36792d75563080fa18ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 22:42:06.038703 1744024 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0827 22:42:06.038812 1744024 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0827 22:42:06.039076 1744024 config.go:182] Loaded profile config "addons-958846": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0827 22:42:06.039113 1744024 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0827 22:42:06.039201 1744024 addons.go:69] Setting yakd=true in profile "addons-958846"
	I0827 22:42:06.039230 1744024 addons.go:234] Setting addon yakd=true in "addons-958846"
	I0827 22:42:06.039261 1744024 host.go:66] Checking if "addons-958846" exists ...
	I0827 22:42:06.039748 1744024 cli_runner.go:164] Run: docker container inspect addons-958846 --format={{.State.Status}}
	I0827 22:42:06.040253 1744024 addons.go:69] Setting inspektor-gadget=true in profile "addons-958846"
	I0827 22:42:06.040289 1744024 addons.go:234] Setting addon inspektor-gadget=true in "addons-958846"
	I0827 22:42:06.040297 1744024 addons.go:69] Setting metrics-server=true in profile "addons-958846"
	I0827 22:42:06.040322 1744024 host.go:66] Checking if "addons-958846" exists ...
	I0827 22:42:06.040327 1744024 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-958846"
	I0827 22:42:06.040344 1744024 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-958846"
	I0827 22:42:06.040371 1744024 host.go:66] Checking if "addons-958846" exists ...
	I0827 22:42:06.040784 1744024 cli_runner.go:164] Run: docker container inspect addons-958846 --format={{.State.Status}}
	I0827 22:42:06.040824 1744024 cli_runner.go:164] Run: docker container inspect addons-958846 --format={{.State.Status}}
	I0827 22:42:06.041193 1744024 addons.go:69] Setting registry=true in profile "addons-958846"
	I0827 22:42:06.041225 1744024 addons.go:234] Setting addon registry=true in "addons-958846"
	I0827 22:42:06.041258 1744024 host.go:66] Checking if "addons-958846" exists ...
	I0827 22:42:06.041654 1744024 cli_runner.go:164] Run: docker container inspect addons-958846 --format={{.State.Status}}
	I0827 22:42:06.040322 1744024 addons.go:234] Setting addon metrics-server=true in "addons-958846"
	I0827 22:42:06.043704 1744024 host.go:66] Checking if "addons-958846" exists ...
	I0827 22:42:06.044155 1744024 cli_runner.go:164] Run: docker container inspect addons-958846 --format={{.State.Status}}
	I0827 22:42:06.044906 1744024 addons.go:69] Setting storage-provisioner=true in profile "addons-958846"
	I0827 22:42:06.044953 1744024 addons.go:234] Setting addon storage-provisioner=true in "addons-958846"
	I0827 22:42:06.044994 1744024 host.go:66] Checking if "addons-958846" exists ...
	I0827 22:42:06.045437 1744024 cli_runner.go:164] Run: docker container inspect addons-958846 --format={{.State.Status}}
	I0827 22:42:06.052741 1744024 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-958846"
	I0827 22:42:06.052801 1744024 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-958846"
	I0827 22:42:06.053164 1744024 cli_runner.go:164] Run: docker container inspect addons-958846 --format={{.State.Status}}
	I0827 22:42:06.054724 1744024 addons.go:69] Setting cloud-spanner=true in profile "addons-958846"
	I0827 22:42:06.054765 1744024 addons.go:234] Setting addon cloud-spanner=true in "addons-958846"
	I0827 22:42:06.054806 1744024 host.go:66] Checking if "addons-958846" exists ...
	I0827 22:42:06.055236 1744024 cli_runner.go:164] Run: docker container inspect addons-958846 --format={{.State.Status}}
	I0827 22:42:06.074362 1744024 addons.go:69] Setting volcano=true in profile "addons-958846"
	I0827 22:42:06.074409 1744024 addons.go:234] Setting addon volcano=true in "addons-958846"
	I0827 22:42:06.074453 1744024 host.go:66] Checking if "addons-958846" exists ...
	I0827 22:42:06.074943 1744024 cli_runner.go:164] Run: docker container inspect addons-958846 --format={{.State.Status}}
	I0827 22:42:06.088729 1744024 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-958846"
	I0827 22:42:06.088807 1744024 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-958846"
	I0827 22:42:06.088839 1744024 host.go:66] Checking if "addons-958846" exists ...
	I0827 22:42:06.089282 1744024 cli_runner.go:164] Run: docker container inspect addons-958846 --format={{.State.Status}}
	I0827 22:42:06.098718 1744024 addons.go:69] Setting volumesnapshots=true in profile "addons-958846"
	I0827 22:42:06.098767 1744024 addons.go:234] Setting addon volumesnapshots=true in "addons-958846"
	I0827 22:42:06.098806 1744024 host.go:66] Checking if "addons-958846" exists ...
	I0827 22:42:06.099294 1744024 cli_runner.go:164] Run: docker container inspect addons-958846 --format={{.State.Status}}
	I0827 22:42:06.120066 1744024 addons.go:69] Setting default-storageclass=true in profile "addons-958846"
	I0827 22:42:06.120118 1744024 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-958846"
	I0827 22:42:06.120443 1744024 cli_runner.go:164] Run: docker container inspect addons-958846 --format={{.State.Status}}
	I0827 22:42:06.139804 1744024 addons.go:69] Setting gcp-auth=true in profile "addons-958846"
	I0827 22:42:06.139875 1744024 mustload.go:65] Loading cluster: addons-958846
	I0827 22:42:06.140070 1744024 config.go:182] Loaded profile config "addons-958846": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0827 22:42:06.140362 1744024 cli_runner.go:164] Run: docker container inspect addons-958846 --format={{.State.Status}}
	I0827 22:42:06.142881 1744024 out.go:177] * Verifying Kubernetes components...
	I0827 22:42:06.167491 1744024 addons.go:69] Setting ingress=true in profile "addons-958846"
	I0827 22:42:06.167542 1744024 addons.go:234] Setting addon ingress=true in "addons-958846"
	I0827 22:42:06.167586 1744024 host.go:66] Checking if "addons-958846" exists ...
	I0827 22:42:06.168148 1744024 cli_runner.go:164] Run: docker container inspect addons-958846 --format={{.State.Status}}
	I0827 22:42:06.197827 1744024 addons.go:69] Setting ingress-dns=true in profile "addons-958846"
	I0827 22:42:06.197879 1744024 addons.go:234] Setting addon ingress-dns=true in "addons-958846"
	I0827 22:42:06.197927 1744024 host.go:66] Checking if "addons-958846" exists ...
	I0827 22:42:06.198394 1744024 cli_runner.go:164] Run: docker container inspect addons-958846 --format={{.State.Status}}
	I0827 22:42:06.239307 1744024 addons.go:234] Setting addon default-storageclass=true in "addons-958846"
	I0827 22:42:06.239355 1744024 host.go:66] Checking if "addons-958846" exists ...
	I0827 22:42:06.239781 1744024 cli_runner.go:164] Run: docker container inspect addons-958846 --format={{.State.Status}}
	I0827 22:42:06.250280 1744024 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0827 22:42:06.252638 1744024 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0827 22:42:06.263355 1744024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 22:42:06.263707 1744024 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0827 22:42:06.264775 1744024 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-958846"
	I0827 22:42:06.264813 1744024 host.go:66] Checking if "addons-958846" exists ...
	I0827 22:42:06.265231 1744024 cli_runner.go:164] Run: docker container inspect addons-958846 --format={{.State.Status}}
	I0827 22:42:06.272919 1744024 out.go:177]   - Using image docker.io/registry:2.8.3
	I0827 22:42:06.274411 1744024 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0827 22:42:06.274567 1744024 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0827 22:42:06.275593 1744024 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0827 22:42:06.271797 1744024 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0827 22:42:06.271789 1744024 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0827 22:42:06.275716 1744024 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0827 22:42:06.275809 1744024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-958846
	I0827 22:42:06.279225 1744024 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0827 22:42:06.279785 1744024 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0827 22:42:06.280187 1744024 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0827 22:42:06.285272 1744024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-958846
	I0827 22:42:06.304167 1744024 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0827 22:42:06.304300 1744024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-958846
	I0827 22:42:06.318409 1744024 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0827 22:42:06.318489 1744024 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0827 22:42:06.318585 1744024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-958846
	I0827 22:42:06.324881 1744024 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0827 22:42:06.324952 1744024 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0827 22:42:06.325047 1744024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-958846
	I0827 22:42:06.325219 1744024 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0827 22:42:06.325254 1744024 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0827 22:42:06.325307 1744024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-958846
	I0827 22:42:06.335009 1744024 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0827 22:42:06.335251 1744024 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0827 22:42:06.335266 1744024 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0827 22:42:06.335345 1744024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-958846
	I0827 22:42:06.336419 1744024 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0827 22:42:06.337634 1744024 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0827 22:42:06.337650 1744024 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0827 22:42:06.337729 1744024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-958846
	I0827 22:42:06.364876 1744024 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0827 22:42:06.367649 1744024 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0827 22:42:06.373636 1744024 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0827 22:42:06.373663 1744024 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0827 22:42:06.373734 1744024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-958846
	I0827 22:42:06.384041 1744024 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0827 22:42:06.387074 1744024 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0827 22:42:06.387132 1744024 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0827 22:42:06.388744 1744024 host.go:66] Checking if "addons-958846" exists ...
	I0827 22:42:06.394031 1744024 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0827 22:42:06.394054 1744024 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0827 22:42:06.394122 1744024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-958846
	I0827 22:42:06.396880 1744024 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0827 22:42:06.400350 1744024 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0827 22:42:06.411739 1744024 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0827 22:42:06.416720 1744024 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0827 22:42:06.419822 1744024 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0827 22:42:06.422572 1744024 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0827 22:42:06.426173 1744024 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0827 22:42:06.429728 1744024 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0827 22:42:06.429756 1744024 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0827 22:42:06.429830 1744024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-958846
	I0827 22:42:06.442356 1744024 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0827 22:42:06.450223 1744024 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0827 22:42:06.452740 1744024 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0827 22:42:06.452766 1744024 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0827 22:42:06.452843 1744024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-958846
	I0827 22:42:06.498107 1744024 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0827 22:42:06.498128 1744024 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0827 22:42:06.498193 1744024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-958846
	I0827 22:42:06.511872 1744024 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0827 22:42:06.526261 1744024 out.go:177]   - Using image docker.io/busybox:stable
	I0827 22:42:06.528806 1744024 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0827 22:42:06.528834 1744024 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0827 22:42:06.528907 1744024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-958846
	I0827 22:42:06.529946 1744024 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19522-1737862/.minikube/machines/addons-958846/id_rsa Username:docker}
	I0827 22:42:06.530724 1744024 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19522-1737862/.minikube/machines/addons-958846/id_rsa Username:docker}
	I0827 22:42:06.550750 1744024 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19522-1737862/.minikube/machines/addons-958846/id_rsa Username:docker}
	I0827 22:42:06.552289 1744024 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19522-1737862/.minikube/machines/addons-958846/id_rsa Username:docker}
	I0827 22:42:06.560554 1744024 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19522-1737862/.minikube/machines/addons-958846/id_rsa Username:docker}
	I0827 22:42:06.575949 1744024 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19522-1737862/.minikube/machines/addons-958846/id_rsa Username:docker}
	I0827 22:42:06.578182 1744024 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19522-1737862/.minikube/machines/addons-958846/id_rsa Username:docker}
	I0827 22:42:06.597320 1744024 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19522-1737862/.minikube/machines/addons-958846/id_rsa Username:docker}
	I0827 22:42:06.626092 1744024 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19522-1737862/.minikube/machines/addons-958846/id_rsa Username:docker}
	I0827 22:42:06.640239 1744024 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19522-1737862/.minikube/machines/addons-958846/id_rsa Username:docker}
	I0827 22:42:06.665956 1744024 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19522-1737862/.minikube/machines/addons-958846/id_rsa Username:docker}
	I0827 22:42:06.670470 1744024 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19522-1737862/.minikube/machines/addons-958846/id_rsa Username:docker}
	I0827 22:42:06.671159 1744024 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19522-1737862/.minikube/machines/addons-958846/id_rsa Username:docker}
	I0827 22:42:06.680291 1744024 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19522-1737862/.minikube/machines/addons-958846/id_rsa Username:docker}
	W0827 22:42:06.681195 1744024 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0827 22:42:06.681221 1744024 retry.go:31] will retry after 272.159969ms: ssh: handshake failed: EOF
	I0827 22:42:06.797215 1744024 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0827 22:42:06.797315 1744024 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0827 22:42:07.050172 1744024 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0827 22:42:07.050195 1744024 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0827 22:42:07.062487 1744024 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0827 22:42:07.214064 1744024 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0827 22:42:07.241422 1744024 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0827 22:42:07.241451 1744024 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0827 22:42:07.321541 1744024 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0827 22:42:07.333160 1744024 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0827 22:42:07.333202 1744024 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0827 22:42:07.358929 1744024 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0827 22:42:07.358956 1744024 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0827 22:42:07.406128 1744024 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0827 22:42:07.406216 1744024 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0827 22:42:07.427085 1744024 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0827 22:42:07.515383 1744024 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0827 22:42:07.521890 1744024 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0827 22:42:07.521966 1744024 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0827 22:42:07.557441 1744024 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0827 22:42:07.557517 1744024 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0827 22:42:07.694799 1744024 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0827 22:42:07.716505 1744024 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0827 22:42:07.731912 1744024 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0827 22:42:07.838414 1744024 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0827 22:42:07.838438 1744024 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0827 22:42:07.876853 1744024 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0827 22:42:07.876876 1744024 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0827 22:42:07.880893 1744024 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0827 22:42:07.880916 1744024 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0827 22:42:08.082775 1744024 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0827 22:42:08.082873 1744024 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0827 22:42:08.132618 1744024 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0827 22:42:08.132692 1744024 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0827 22:42:08.142684 1744024 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0827 22:42:08.142762 1744024 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0827 22:42:08.162904 1744024 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0827 22:42:08.162983 1744024 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0827 22:42:08.375602 1744024 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0827 22:42:08.375680 1744024 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0827 22:42:08.404813 1744024 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0827 22:42:08.404892 1744024 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0827 22:42:08.408839 1744024 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0827 22:42:08.408917 1744024 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0827 22:42:08.482139 1744024 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0827 22:42:08.487363 1744024 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0827 22:42:08.487437 1744024 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0827 22:42:08.564913 1744024 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0827 22:42:08.564941 1744024 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0827 22:42:08.651954 1744024 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0827 22:42:08.652604 1744024 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0827 22:42:08.652623 1744024 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0827 22:42:08.698836 1744024 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0827 22:42:08.698871 1744024 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0827 22:42:08.714566 1744024 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0827 22:42:08.714592 1744024 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0827 22:42:08.838886 1744024 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0827 22:42:08.838913 1744024 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0827 22:42:08.943154 1744024 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0827 22:42:08.943180 1744024 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0827 22:42:09.002565 1744024 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0827 22:42:09.103788 1744024 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0827 22:42:09.291335 1744024 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0827 22:42:09.291360 1744024 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0827 22:42:09.302417 1744024 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0827 22:42:09.302444 1744024 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0827 22:42:09.621771 1744024 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0827 22:42:09.701046 1744024 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0827 22:42:09.701072 1744024 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0827 22:42:09.832753 1744024 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.035414117s)
	I0827 22:42:09.833625 1744024 node_ready.go:35] waiting up to 6m0s for node "addons-958846" to be "Ready" ...
	I0827 22:42:09.833849 1744024 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.036606928s)
	I0827 22:42:09.833870 1744024 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0827 22:42:09.834765 1744024 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.772249775s)
	I0827 22:42:09.841816 1744024 node_ready.go:49] node "addons-958846" has status "Ready":"True"
	I0827 22:42:09.841844 1744024 node_ready.go:38] duration metric: took 8.183978ms for node "addons-958846" to be "Ready" ...
	I0827 22:42:09.841857 1744024 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0827 22:42:09.853102 1744024 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-ff4rh" in "kube-system" namespace to be "Ready" ...
	I0827 22:42:09.874099 1744024 pod_ready.go:93] pod "coredns-6f6b679f8f-ff4rh" in "kube-system" namespace has status "Ready":"True"
	I0827 22:42:09.874131 1744024 pod_ready.go:82] duration metric: took 20.989725ms for pod "coredns-6f6b679f8f-ff4rh" in "kube-system" namespace to be "Ready" ...
	I0827 22:42:09.874145 1744024 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-t5bhl" in "kube-system" namespace to be "Ready" ...
	I0827 22:42:10.020729 1744024 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.80662348s)
	I0827 22:42:10.270019 1744024 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0827 22:42:10.270050 1744024 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0827 22:42:10.368200 1744024 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-958846" context rescaled to 1 replicas
	I0827 22:42:10.881034 1744024 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0827 22:42:10.881060 1744024 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0827 22:42:11.882139 1744024 pod_ready.go:103] pod "coredns-6f6b679f8f-t5bhl" in "kube-system" namespace has status "Ready":"False"
	I0827 22:42:12.155504 1744024 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0827 22:42:12.155532 1744024 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0827 22:42:12.285011 1744024 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0827 22:42:12.285039 1744024 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0827 22:42:12.663309 1744024 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0827 22:42:13.399350 1744024 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0827 22:42:13.399484 1744024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-958846
	I0827 22:42:13.424958 1744024 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19522-1737862/.minikube/machines/addons-958846/id_rsa Username:docker}
	I0827 22:42:14.384890 1744024 pod_ready.go:103] pod "coredns-6f6b679f8f-t5bhl" in "kube-system" namespace has status "Ready":"False"
	I0827 22:42:14.476754 1744024 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0827 22:42:14.765572 1744024 addons.go:234] Setting addon gcp-auth=true in "addons-958846"
	I0827 22:42:14.765675 1744024 host.go:66] Checking if "addons-958846" exists ...
	I0827 22:42:14.766150 1744024 cli_runner.go:164] Run: docker container inspect addons-958846 --format={{.State.Status}}
	I0827 22:42:14.788051 1744024 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0827 22:42:14.788108 1744024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-958846
	I0827 22:42:14.814049 1744024 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19522-1737862/.minikube/machines/addons-958846/id_rsa Username:docker}
	I0827 22:42:16.882573 1744024 pod_ready.go:103] pod "coredns-6f6b679f8f-t5bhl" in "kube-system" namespace has status "Ready":"False"
	I0827 22:42:18.939290 1744024 pod_ready.go:103] pod "coredns-6f6b679f8f-t5bhl" in "kube-system" namespace has status "Ready":"False"
	I0827 22:42:19.362129 1744024 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (12.04054568s)
	I0827 22:42:19.362220 1744024 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (11.935057238s)
	I0827 22:42:19.362282 1744024 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (11.846828352s)
	I0827 22:42:19.362320 1744024 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (11.667499621s)
	I0827 22:42:19.362346 1744024 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (11.64582242s)
	I0827 22:42:19.362432 1744024 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (11.630498518s)
	I0827 22:42:19.363229 1744024 addons.go:475] Verifying addon ingress=true in "addons-958846"
	I0827 22:42:19.362454 1744024 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (10.880239368s)
	I0827 22:42:19.363546 1744024 addons.go:475] Verifying addon registry=true in "addons-958846"
	I0827 22:42:19.362533 1744024 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.710549993s)
	I0827 22:42:19.363703 1744024 addons.go:475] Verifying addon metrics-server=true in "addons-958846"
	I0827 22:42:19.362565 1744024 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (10.359971683s)
	I0827 22:42:19.362638 1744024 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (10.258821994s)
	I0827 22:42:19.362702 1744024 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (9.740903936s)
	W0827 22:42:19.365373 1744024 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0827 22:42:19.365398 1744024 retry.go:31] will retry after 251.756199ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
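	[editor's note] The failure above is the usual CRD race: the `kubectl apply` batch creates the VolumeSnapshot CRDs and a `VolumeSnapshotClass` in the same invocation, so the custom resource is rejected ("ensure CRDs are installed first") until the CRDs are registered, and minikube's `retry.go` simply re-applies after a short backoff (251ms here, succeeding ~2.5s later at 22:42:22). A minimal Python sketch of that retry-with-backoff pattern — this is illustrative only, not minikube's actual Go implementation, and `apply_snapshot_manifests` is a stand-in for the real `kubectl apply`:

	```python
	import random
	import time


	def retry_with_backoff(fn, max_attempts=5, base_delay=0.25):
	    """Call fn until it succeeds, sleeping with jittered exponential
	    backoff between attempts; re-raise after the final attempt."""
	    for attempt in range(max_attempts):
	        try:
	            return fn()
	        except RuntimeError:
	            if attempt == max_attempts - 1:
	                raise
	            # ~250ms, ~500ms, ~1s, ... with +/-10% jitter
	            time.sleep(base_delay * (2 ** attempt) * random.uniform(0.9, 1.1))


	# Simulated apply: fails while it races CRD registration, then succeeds.
	state = {"calls": 0}


	def apply_snapshot_manifests():
	    state["calls"] += 1
	    if state["calls"] < 3:
	        raise RuntimeError('no matches for kind "VolumeSnapshotClass"')
	    return "applied"
	```

	Splitting the CRD manifests into their own apply (and waiting for the CRDs to be Established) would avoid the race entirely; the retry loop is the simpler workaround the addon installer uses here.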
	I0827 22:42:19.366079 1744024 out.go:177] * Verifying ingress addon...
	I0827 22:42:19.366204 1744024 out.go:177] * Verifying registry addon...
	I0827 22:42:19.368129 1744024 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-958846 service yakd-dashboard -n yakd-dashboard
	
	I0827 22:42:19.370731 1744024 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0827 22:42:19.371686 1744024 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0827 22:42:19.444739 1744024 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0827 22:42:19.444767 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 22:42:19.444870 1744024 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0827 22:42:19.444909 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0827 22:42:19.452646 1744024 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0827 22:42:19.617298 1744024 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0827 22:42:19.883849 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 22:42:19.884936 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:19.921492 1744024 pod_ready.go:98] pod "coredns-6f6b679f8f-t5bhl" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-27 22:42:19 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-27 22:42:06 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-27 22:42:06 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-27 22:42:06 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-27 22:42:06 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.2 HostIPs:[{IP:192.168.49.2}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-08-27 22:42:06 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-08-27 22:42:07 +0000 UTC,FinishedAt:2024-08-27 22:42:18 +0000 UTC,ContainerID:docker://b4eb6ac74b18d1c59baf6942c12a14f7c981c62381b854eb53d5137bcded0ce9,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1 ContainerID:docker://b4eb6ac74b18d1c59baf6942c12a14f7c981c62381b854eb53d5137bcded0ce9 Started:0x4001b597f0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0x40017cdaf0} {Name:kube-api-access-gm92g MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0x40017cdb00}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0827 22:42:19.921578 1744024 pod_ready.go:82] duration metric: took 10.047424499s for pod "coredns-6f6b679f8f-t5bhl" in "kube-system" namespace to be "Ready" ...
	E0827 22:42:19.921604 1744024 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-6f6b679f8f-t5bhl" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-27 22:42:19 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-27 22:42:06 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-27 22:42:06 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-27 22:42:06 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-27 22:42:06 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.2 HostIPs:[{IP:192.168.49.2}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-08-27 22:42:06 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-08-27 22:42:07 +0000 UTC,FinishedAt:2024-08-27 22:42:18 +0000 UTC,ContainerID:docker://b4eb6ac74b18d1c59baf6942c12a14f7c981c62381b854eb53d5137bcded0ce9,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1 ContainerID:docker://b4eb6ac74b18d1c59baf6942c12a14f7c981c62381b854eb53d5137bcded0ce9 Started:0x4001b597f0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0x40017cdaf0} {Name:kube-api-access-gm92g MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0x40017cdb00}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0827 22:42:19.921643 1744024 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-958846" in "kube-system" namespace to be "Ready" ...
	I0827 22:42:19.937040 1744024 pod_ready.go:93] pod "etcd-addons-958846" in "kube-system" namespace has status "Ready":"True"
	I0827 22:42:19.937115 1744024 pod_ready.go:82] duration metric: took 15.440125ms for pod "etcd-addons-958846" in "kube-system" namespace to be "Ready" ...
	I0827 22:42:19.937142 1744024 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-958846" in "kube-system" namespace to be "Ready" ...
	I0827 22:42:19.953511 1744024 pod_ready.go:93] pod "kube-apiserver-addons-958846" in "kube-system" namespace has status "Ready":"True"
	I0827 22:42:19.953583 1744024 pod_ready.go:82] duration metric: took 16.418837ms for pod "kube-apiserver-addons-958846" in "kube-system" namespace to be "Ready" ...
	I0827 22:42:19.953611 1744024 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-958846" in "kube-system" namespace to be "Ready" ...
	I0827 22:42:19.963821 1744024 pod_ready.go:93] pod "kube-controller-manager-addons-958846" in "kube-system" namespace has status "Ready":"True"
	I0827 22:42:19.963894 1744024 pod_ready.go:82] duration metric: took 10.261251ms for pod "kube-controller-manager-addons-958846" in "kube-system" namespace to be "Ready" ...
	I0827 22:42:19.963920 1744024 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pcp6w" in "kube-system" namespace to be "Ready" ...
	I0827 22:42:19.977072 1744024 pod_ready.go:93] pod "kube-proxy-pcp6w" in "kube-system" namespace has status "Ready":"True"
	I0827 22:42:19.977159 1744024 pod_ready.go:82] duration metric: took 13.217207ms for pod "kube-proxy-pcp6w" in "kube-system" namespace to be "Ready" ...
	I0827 22:42:19.977186 1744024 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-958846" in "kube-system" namespace to be "Ready" ...
	I0827 22:42:20.228614 1744024 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.565209835s)
	I0827 22:42:20.228651 1744024 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-958846"
	I0827 22:42:20.228811 1744024 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.440739804s)
	I0827 22:42:20.232032 1744024 out.go:177] * Verifying csi-hostpath-driver addon...
	I0827 22:42:20.232037 1744024 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0827 22:42:20.235015 1744024 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0827 22:42:20.236015 1744024 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0827 22:42:20.238271 1744024 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0827 22:42:20.238329 1744024 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0827 22:42:20.244230 1744024 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0827 22:42:20.244253 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:20.278863 1744024 pod_ready.go:93] pod "kube-scheduler-addons-958846" in "kube-system" namespace has status "Ready":"True"
	I0827 22:42:20.278932 1744024 pod_ready.go:82] duration metric: took 301.723747ms for pod "kube-scheduler-addons-958846" in "kube-system" namespace to be "Ready" ...
	I0827 22:42:20.278962 1744024 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-ggb2x" in "kube-system" namespace to be "Ready" ...
	I0827 22:42:20.360944 1744024 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0827 22:42:20.360972 1744024 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0827 22:42:20.377163 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 22:42:20.379909 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:20.526968 1744024 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0827 22:42:20.526994 1744024 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0827 22:42:20.614932 1744024 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0827 22:42:20.743076 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:20.879811 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 22:42:20.881058 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:21.242298 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:21.376800 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:21.377690 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 22:42:21.741275 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:21.886316 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 22:42:21.887458 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:22.166136 1744024 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.548785443s)
	I0827 22:42:22.166284 1744024 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.551259032s)
	I0827 22:42:22.169894 1744024 addons.go:475] Verifying addon gcp-auth=true in "addons-958846"
	I0827 22:42:22.172841 1744024 out.go:177] * Verifying gcp-auth addon...
	I0827 22:42:22.176298 1744024 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0827 22:42:22.179535 1744024 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0827 22:42:22.282747 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:22.287475 1744024 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-ggb2x" in "kube-system" namespace has status "Ready":"False"
	I0827 22:42:22.376164 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:22.378645 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 22:42:22.784213 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:22.877620 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:22.878454 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 22:42:23.241474 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:23.377394 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 22:42:23.379518 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:23.741221 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:23.876927 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 22:42:23.879267 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:24.282791 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:24.376868 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:24.377390 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 22:42:24.741608 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:24.786012 1744024 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-ggb2x" in "kube-system" namespace has status "Ready":"True"
	I0827 22:42:24.786039 1744024 pod_ready.go:82] duration metric: took 4.507055127s for pod "nvidia-device-plugin-daemonset-ggb2x" in "kube-system" namespace to be "Ready" ...
	I0827 22:42:24.786049 1744024 pod_ready.go:39] duration metric: took 14.944180463s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0827 22:42:24.786070 1744024 api_server.go:52] waiting for apiserver process to appear ...
	I0827 22:42:24.786135 1744024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0827 22:42:24.801156 1744024 api_server.go:72] duration metric: took 18.762405938s to wait for apiserver process to appear ...
	I0827 22:42:24.801184 1744024 api_server.go:88] waiting for apiserver healthz status ...
	I0827 22:42:24.801205 1744024 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0827 22:42:24.809936 1744024 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0827 22:42:24.811228 1744024 api_server.go:141] control plane version: v1.31.0
	I0827 22:42:24.811254 1744024 api_server.go:131] duration metric: took 10.062765ms to wait for apiserver health ...
	I0827 22:42:24.811264 1744024 system_pods.go:43] waiting for kube-system pods to appear ...
	I0827 22:42:24.821625 1744024 system_pods.go:59] 17 kube-system pods found
	I0827 22:42:24.821672 1744024 system_pods.go:61] "coredns-6f6b679f8f-ff4rh" [a60201e2-0d74-47e1-98b4-b0e7562ab41e] Running
	I0827 22:42:24.821683 1744024 system_pods.go:61] "csi-hostpath-attacher-0" [893ed8a7-4327-46c3-b2b4-b08fc8cfd518] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0827 22:42:24.821692 1744024 system_pods.go:61] "csi-hostpath-resizer-0" [d3a54c1c-c8f8-41c6-a9e5-aabb616e875d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0827 22:42:24.821709 1744024 system_pods.go:61] "csi-hostpathplugin-c2flj" [27ffaff0-9d66-42d2-a2c5-4cbd3005b73d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0827 22:42:24.821719 1744024 system_pods.go:61] "etcd-addons-958846" [53a503b3-c63e-44f1-aa8f-452d408fe303] Running
	I0827 22:42:24.821725 1744024 system_pods.go:61] "kube-apiserver-addons-958846" [9716e59d-7460-4c41-8d18-e4d7a7875e09] Running
	I0827 22:42:24.821735 1744024 system_pods.go:61] "kube-controller-manager-addons-958846" [f1fd5451-da71-420b-9f3b-b02cea127709] Running
	I0827 22:42:24.821746 1744024 system_pods.go:61] "kube-ingress-dns-minikube" [cfde158a-0b5b-411a-b5c1-7ebf9e932f84] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0827 22:42:24.821755 1744024 system_pods.go:61] "kube-proxy-pcp6w" [61886f86-3e4c-436c-83e4-4ed6d1b2be04] Running
	I0827 22:42:24.821760 1744024 system_pods.go:61] "kube-scheduler-addons-958846" [01cf13ff-5618-4828-b87a-15741a3af1bb] Running
	I0827 22:42:24.821771 1744024 system_pods.go:61] "metrics-server-8988944d9-hh9mg" [1fac511e-cb11-4c1d-9bf8-70c4b3e623ec] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0827 22:42:24.821782 1744024 system_pods.go:61] "nvidia-device-plugin-daemonset-ggb2x" [a1d78076-3b4f-478e-b23f-c467a85cbf00] Running
	I0827 22:42:24.821791 1744024 system_pods.go:61] "registry-6fb4cdfc84-bg785" [f77f0c2c-2c65-4211-879a-30b245bc30e8] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0827 22:42:24.821805 1744024 system_pods.go:61] "registry-proxy-46gqr" [4c757aa1-4447-48a4-9113-9bef04a988f4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0827 22:42:24.821818 1744024 system_pods.go:61] "snapshot-controller-56fcc65765-87nsp" [03871430-4f64-46bd-9f29-fd4b9ca8d640] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0827 22:42:24.821849 1744024 system_pods.go:61] "snapshot-controller-56fcc65765-f5lth" [4544afb5-4de3-48a4-aa47-3c47b02b6d69] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0827 22:42:24.821858 1744024 system_pods.go:61] "storage-provisioner" [4bf46eba-4c06-4ba0-aea4-7d7dc67104bc] Running
	I0827 22:42:24.821869 1744024 system_pods.go:74] duration metric: took 10.598498ms to wait for pod list to return data ...
	I0827 22:42:24.821877 1744024 default_sa.go:34] waiting for default service account to be created ...
	I0827 22:42:24.828790 1744024 default_sa.go:45] found service account: "default"
	I0827 22:42:24.828814 1744024 default_sa.go:55] duration metric: took 6.92474ms for default service account to be created ...
	I0827 22:42:24.828823 1744024 system_pods.go:116] waiting for k8s-apps to be running ...
	I0827 22:42:24.875241 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:24.877565 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 22:42:24.884950 1744024 system_pods.go:86] 17 kube-system pods found
	I0827 22:42:24.884995 1744024 system_pods.go:89] "coredns-6f6b679f8f-ff4rh" [a60201e2-0d74-47e1-98b4-b0e7562ab41e] Running
	I0827 22:42:24.885008 1744024 system_pods.go:89] "csi-hostpath-attacher-0" [893ed8a7-4327-46c3-b2b4-b08fc8cfd518] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0827 22:42:24.885016 1744024 system_pods.go:89] "csi-hostpath-resizer-0" [d3a54c1c-c8f8-41c6-a9e5-aabb616e875d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0827 22:42:24.885088 1744024 system_pods.go:89] "csi-hostpathplugin-c2flj" [27ffaff0-9d66-42d2-a2c5-4cbd3005b73d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0827 22:42:24.885094 1744024 system_pods.go:89] "etcd-addons-958846" [53a503b3-c63e-44f1-aa8f-452d408fe303] Running
	I0827 22:42:24.885105 1744024 system_pods.go:89] "kube-apiserver-addons-958846" [9716e59d-7460-4c41-8d18-e4d7a7875e09] Running
	I0827 22:42:24.885121 1744024 system_pods.go:89] "kube-controller-manager-addons-958846" [f1fd5451-da71-420b-9f3b-b02cea127709] Running
	I0827 22:42:24.885153 1744024 system_pods.go:89] "kube-ingress-dns-minikube" [cfde158a-0b5b-411a-b5c1-7ebf9e932f84] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0827 22:42:24.885175 1744024 system_pods.go:89] "kube-proxy-pcp6w" [61886f86-3e4c-436c-83e4-4ed6d1b2be04] Running
	I0827 22:42:24.885188 1744024 system_pods.go:89] "kube-scheduler-addons-958846" [01cf13ff-5618-4828-b87a-15741a3af1bb] Running
	I0827 22:42:24.885196 1744024 system_pods.go:89] "metrics-server-8988944d9-hh9mg" [1fac511e-cb11-4c1d-9bf8-70c4b3e623ec] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0827 22:42:24.885203 1744024 system_pods.go:89] "nvidia-device-plugin-daemonset-ggb2x" [a1d78076-3b4f-478e-b23f-c467a85cbf00] Running
	I0827 22:42:24.885210 1744024 system_pods.go:89] "registry-6fb4cdfc84-bg785" [f77f0c2c-2c65-4211-879a-30b245bc30e8] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0827 22:42:24.885218 1744024 system_pods.go:89] "registry-proxy-46gqr" [4c757aa1-4447-48a4-9113-9bef04a988f4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0827 22:42:24.885231 1744024 system_pods.go:89] "snapshot-controller-56fcc65765-87nsp" [03871430-4f64-46bd-9f29-fd4b9ca8d640] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0827 22:42:24.885253 1744024 system_pods.go:89] "snapshot-controller-56fcc65765-f5lth" [4544afb5-4de3-48a4-aa47-3c47b02b6d69] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0827 22:42:24.885263 1744024 system_pods.go:89] "storage-provisioner" [4bf46eba-4c06-4ba0-aea4-7d7dc67104bc] Running
	I0827 22:42:24.885280 1744024 system_pods.go:126] duration metric: took 56.442921ms to wait for k8s-apps to be running ...
	I0827 22:42:24.885293 1744024 system_svc.go:44] waiting for kubelet service to be running ....
	I0827 22:42:24.885362 1744024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0827 22:42:24.897708 1744024 system_svc.go:56] duration metric: took 12.405417ms WaitForService to wait for kubelet
	I0827 22:42:24.897739 1744024 kubeadm.go:582] duration metric: took 18.858994131s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0827 22:42:24.897764 1744024 node_conditions.go:102] verifying NodePressure condition ...
	I0827 22:42:25.079485 1744024 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0827 22:42:25.079570 1744024 node_conditions.go:123] node cpu capacity is 2
	I0827 22:42:25.079600 1744024 node_conditions.go:105] duration metric: took 181.801198ms to run NodePressure ...
	I0827 22:42:25.079627 1744024 start.go:241] waiting for startup goroutines ...
	I0827 22:42:25.241128 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:25.377218 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:25.379283 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 22:42:25.741913 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:25.875794 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 22:42:25.876928 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:26.241104 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:26.377084 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 22:42:26.377562 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:26.741485 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:26.876567 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 22:42:26.876826 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:27.242347 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:27.377712 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:27.378021 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 22:42:27.741179 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:27.876972 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:27.877101 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 22:42:28.241018 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:28.377558 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 22:42:28.379096 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:28.740768 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:28.874822 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:28.876601 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 22:42:29.241324 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:29.375665 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 22:42:29.376669 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:29.741418 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:29.879675 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 22:42:29.880919 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:30.242783 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:30.377976 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:30.379010 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 22:42:30.742392 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:30.877314 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:30.878210 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 22:42:31.240888 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:31.377702 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 22:42:31.378964 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:31.741269 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:31.875118 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:31.877553 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 22:42:32.241400 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:32.377123 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 22:42:32.378324 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:32.740730 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:32.876912 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 22:42:32.878282 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:33.241418 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:33.374993 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:33.377778 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 22:42:33.740852 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:33.875923 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 22:42:33.876866 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:34.241393 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:34.376833 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:34.377314 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 22:42:34.741153 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:34.875579 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 22:42:34.876866 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:35.241186 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:35.375881 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:35.376762 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 22:42:35.741341 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:35.875867 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 22:42:35.877869 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:36.254761 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:36.383958 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:36.384689 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 22:42:36.741315 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:36.876572 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:36.877518 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 22:42:37.241436 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:37.377063 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 22:42:37.377995 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:37.742516 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:37.875402 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 22:42:37.877129 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:38.241152 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:38.375704 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:38.376667 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 22:42:38.740979 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:38.875814 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:38.876701 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 22:42:39.241206 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:39.375782 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 22:42:39.378166 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:39.740979 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:39.876042 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 22:42:39.877033 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:40.240912 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:40.385700 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 22:42:40.387798 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:40.740569 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:40.878241 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:40.879995 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 22:42:41.241509 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:41.376440 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:41.377168 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 22:42:41.740739 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:41.874673 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:41.876503 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 22:42:42.241638 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:42.376761 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:42.377335 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 22:42:42.741437 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:42.876586 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:42.878275 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 22:42:43.240411 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:43.374420 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:43.376627 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 22:42:43.741389 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:43.878973 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:43.880103 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 22:42:44.240737 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:44.375097 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 22:42:44.376021 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:44.740702 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:44.883546 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:44.887454 1744024 kapi.go:107] duration metric: took 25.515763252s to wait for kubernetes.io/minikube-addons=registry ...
	I0827 22:42:45.252289 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:45.374657 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:45.741713 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:45.875251 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:46.242655 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:46.382208 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:46.743573 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:46.875033 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:47.242676 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:47.375532 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:47.741911 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:47.875660 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:48.252965 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:48.376402 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:48.742577 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:48.877271 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:49.283882 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:49.376249 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:49.742438 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:49.875425 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:50.240534 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:50.389346 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:50.741505 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:50.876804 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:51.262036 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:51.389188 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:51.757021 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:51.874933 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:52.241132 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:52.382910 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:52.741342 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:52.875695 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:53.240624 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:53.376372 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:53.740914 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:53.876014 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:54.240652 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:54.375375 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:54.741111 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:54.875684 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:55.287992 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:55.374944 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:55.742134 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:55.875467 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:56.270981 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:56.376885 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:56.740847 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:56.875925 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:57.240907 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:57.375791 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:57.741524 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:57.875554 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:58.241497 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:58.380155 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:58.782494 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:58.875691 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:59.241162 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:59.376330 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:42:59.757997 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:42:59.875807 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:43:00.249263 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:43:00.376457 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:43:00.741151 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:43:00.875385 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:43:01.241727 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:43:01.376256 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:43:01.742915 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:43:01.879092 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:43:02.241345 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:43:02.375439 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:43:02.741408 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:43:02.878443 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:43:03.242230 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:43:03.376961 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:43:03.740967 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:43:03.875434 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:43:04.240580 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:43:04.376307 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:43:04.741570 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:43:04.875351 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:43:05.241181 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:43:05.374916 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:43:05.785848 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:43:05.874827 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:43:06.282778 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:43:06.375552 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:43:06.782123 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:43:06.876705 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:43:07.241515 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:43:07.375431 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:43:07.793058 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:43:07.892003 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:43:08.250972 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:43:08.375463 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:43:08.741374 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:43:08.876402 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:43:09.241108 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:43:09.375476 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:43:09.741177 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:43:09.885876 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:43:10.246630 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:43:10.375805 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:43:10.782869 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:43:10.882590 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:43:11.240709 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:43:11.375846 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:43:11.740452 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:43:11.876405 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:43:12.240871 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:43:12.380906 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:43:12.740644 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:43:12.877358 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:43:13.241281 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:43:13.375587 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:43:13.741628 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:43:13.880742 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:43:14.283375 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:43:14.383679 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:43:14.741208 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:43:14.876074 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:43:15.241969 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:43:15.376754 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:43:15.741807 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:43:15.875815 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:43:16.242836 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:43:16.376320 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:43:16.741522 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:43:16.875582 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:43:17.241515 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:43:17.375675 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:43:17.742081 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:43:17.875965 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:43:18.241502 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 22:43:18.375841 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:43:18.740756 1744024 kapi.go:107] duration metric: took 58.504735526s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0827 22:43:18.875230 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:43:19.375413 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:43:19.875649 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:43:20.375110 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:43:20.876196 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:43:21.376435 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:43:21.875694 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:43:22.375825 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:43:22.875915 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:43:23.378464 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:43:23.877675 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:43:24.375127 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:43:24.875821 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:43:25.374925 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:43:25.874997 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:43:26.376852 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:43:26.874893 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:43:27.375781 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:43:27.880313 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:43:28.376796 1744024 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 22:43:28.875572 1744024 kapi.go:107] duration metric: took 1m9.504835339s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0827 22:43:44.180277 1744024 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0827 22:43:44.180305 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:43:44.680042 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:43:45.181173 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:43:45.680508 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:43:46.180673 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:43:46.680830 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:43:47.179865 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:43:47.679539 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:43:48.180402 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:43:48.680521 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:43:49.180720 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:43:49.680835 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:43:50.180025 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:43:50.679538 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:43:51.179922 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:43:51.680067 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:43:52.180189 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:43:52.680144 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:43:53.180356 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:43:53.680355 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:43:54.180332 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:43:54.679672 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:43:55.180657 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:43:55.680743 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:43:56.179986 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:43:56.679851 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:43:57.181548 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:43:57.679920 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:43:58.179536 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:43:58.680524 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:43:59.180278 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:43:59.679709 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:00.182860 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:00.679642 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:01.180946 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:01.680945 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:02.180108 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:02.680284 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:03.180764 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:03.680631 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:04.180725 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:04.680552 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:05.180195 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:05.679413 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:06.180214 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:06.680205 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:07.180195 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:07.680754 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:08.180317 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:08.679713 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:09.180045 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:09.680118 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:10.180615 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:10.680807 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:11.179723 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:11.680928 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:12.179859 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:12.679587 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:13.179589 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:13.680419 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:14.180539 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:14.680336 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:15.180721 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:15.679585 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:16.180795 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:16.680569 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:17.180670 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:17.680368 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:18.179989 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:18.679548 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:19.180732 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:19.679896 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:20.180008 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:20.679614 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:21.180655 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:21.680793 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:22.181494 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:22.679810 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:23.180601 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:23.680217 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:24.179845 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:24.683743 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:25.183017 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:25.679561 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:26.181558 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:26.680963 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:27.180407 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:27.679450 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:28.179990 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:28.679527 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:29.180676 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:29.680074 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:30.181491 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:30.679904 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:31.179793 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:31.680797 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:32.180879 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:32.680098 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:33.180347 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:33.680551 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:34.180847 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:34.679439 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:35.180391 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:35.680269 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:36.180341 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:36.680521 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:37.180984 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:37.680236 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:38.181024 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:38.679711 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:39.179832 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:39.679493 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:40.180919 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:40.679241 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:41.180356 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:41.680060 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:42.179748 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:42.680535 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:43.181047 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:43.679387 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:44.180544 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:44.679966 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:45.181675 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:45.679753 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:46.180232 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:46.680523 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:47.180610 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:47.680163 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:48.179470 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:48.679844 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:49.180232 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:49.684059 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:50.180683 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:50.679695 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:51.181242 1744024 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 22:44:51.680054 1744024 kapi.go:107] duration metric: took 2m29.50374979s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0827 22:44:51.683172 1744024 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-958846 cluster.
	I0827 22:44:51.686269 1744024 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0827 22:44:51.688798 1744024 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0827 22:44:51.691381 1744024 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, volcano, storage-provisioner, ingress-dns, metrics-server, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0827 22:44:51.694095 1744024 addons.go:510] duration metric: took 2m45.654973467s for enable addons: enabled=[nvidia-device-plugin cloud-spanner volcano storage-provisioner ingress-dns metrics-server inspektor-gadget yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0827 22:44:51.694149 1744024 start.go:246] waiting for cluster config update ...
	I0827 22:44:51.694171 1744024 start.go:255] writing updated cluster config ...
	I0827 22:44:51.694959 1744024 ssh_runner.go:195] Run: rm -f paused
	I0827 22:44:52.120527 1744024 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0827 22:44:52.123308 1744024 out.go:177] * Done! kubectl is now configured to use "addons-958846" cluster and "default" namespace by default
	
	
	==> Docker <==
	Aug 27 22:54:25 addons-958846 dockerd[1288]: time="2024-08-27T22:54:25.321834059Z" level=info msg="ignoring event" container=f10d7457ad9f1ab32063afb2fbd4290bf834a3f4412a6f747b50143f76dd7253 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 27 22:54:25 addons-958846 dockerd[1288]: time="2024-08-27T22:54:25.366024883Z" level=info msg="ignoring event" container=abba178e8178c4159959f957e175bd82a36f47808ba3830fc031c0c4b6b04690 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 27 22:54:26 addons-958846 dockerd[1288]: time="2024-08-27T22:54:26.846207692Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Aug 27 22:54:26 addons-958846 dockerd[1288]: time="2024-08-27T22:54:26.849383359Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Aug 27 22:54:31 addons-958846 dockerd[1288]: time="2024-08-27T22:54:31.898080849Z" level=info msg="ignoring event" container=a5f9f36fa845386375b69015550b412c716236d7d7501bb59e6cf377b6f2c8fd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 27 22:54:32 addons-958846 dockerd[1288]: time="2024-08-27T22:54:32.126976758Z" level=info msg="ignoring event" container=a23b05349960e185c8ee44e12f3950ff3f4a33fde466e929897317da6721fdce module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 27 22:54:32 addons-958846 cri-dockerd[1547]: time="2024-08-27T22:54:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/41abfa0b0c476d51fa9815b3a8ed80c73f33e53500ea0f1de41dc036ec52f7a9/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Aug 27 22:54:33 addons-958846 dockerd[1288]: time="2024-08-27T22:54:33.028338832Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Aug 27 22:54:33 addons-958846 cri-dockerd[1547]: time="2024-08-27T22:54:33Z" level=info msg="Stop pulling image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: Status: Downloaded newer image for busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Aug 27 22:54:33 addons-958846 dockerd[1288]: time="2024-08-27T22:54:33.820021369Z" level=info msg="ignoring event" container=985ac5c31b3ac4c6e10cfd1929d2db5f2623cb9c4e6b85e3190b681728b7bfc0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 27 22:54:35 addons-958846 dockerd[1288]: time="2024-08-27T22:54:35.760692530Z" level=info msg="ignoring event" container=41abfa0b0c476d51fa9815b3a8ed80c73f33e53500ea0f1de41dc036ec52f7a9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 27 22:54:37 addons-958846 cri-dockerd[1547]: time="2024-08-27T22:54:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f562a525bfa33951b63a8e93ed34489fdca759e91d23030891b7947389b5d0ab/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Aug 27 22:54:38 addons-958846 cri-dockerd[1547]: time="2024-08-27T22:54:38Z" level=info msg="Stop pulling image busybox:stable: Status: Downloaded newer image for busybox:stable"
	Aug 27 22:54:38 addons-958846 dockerd[1288]: time="2024-08-27T22:54:38.730601469Z" level=info msg="ignoring event" container=f9fe8072ff9aceec56f44bc4528c814ca32cee4f52bd17a97ef146f2907879e5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 27 22:54:39 addons-958846 dockerd[1288]: time="2024-08-27T22:54:39.961335408Z" level=info msg="ignoring event" container=f562a525bfa33951b63a8e93ed34489fdca759e91d23030891b7947389b5d0ab module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 27 22:54:41 addons-958846 cri-dockerd[1547]: time="2024-08-27T22:54:41Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1fa166b8854b659bf2d9a5c17cae2707c5b4499644135f02192652ae16739d72/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Aug 27 22:54:41 addons-958846 dockerd[1288]: time="2024-08-27T22:54:41.721152109Z" level=info msg="ignoring event" container=543830a38b646c59a1fa9a173393d0620792232968132a72c7ad7939ae772c58 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 27 22:54:43 addons-958846 dockerd[1288]: time="2024-08-27T22:54:43.050699980Z" level=info msg="ignoring event" container=1fa166b8854b659bf2d9a5c17cae2707c5b4499644135f02192652ae16739d72 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 27 22:54:47 addons-958846 dockerd[1288]: time="2024-08-27T22:54:47.355013645Z" level=info msg="ignoring event" container=d83c6b039ba513e8058cfda62aead471d8ca690c7230c988000e08da2e42b2a6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 27 22:54:48 addons-958846 dockerd[1288]: time="2024-08-27T22:54:48.031459211Z" level=info msg="ignoring event" container=9ced03b306160f50b4276c19fd9eced20473af3060a917ece70987e1e0f9faa9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 27 22:54:48 addons-958846 dockerd[1288]: time="2024-08-27T22:54:48.327956448Z" level=info msg="ignoring event" container=36dcb9f586b9218bf4c43abf50c6b1955f897488f1caf9de4f647bd7ab7b5368 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 27 22:54:48 addons-958846 dockerd[1288]: time="2024-08-27T22:54:48.346823013Z" level=info msg="ignoring event" container=9cde8e904cfe2027dc4c7865f93349c24d00ecf1202fbf07cfb62afa1f1e1a19 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 27 22:54:48 addons-958846 dockerd[1288]: time="2024-08-27T22:54:48.435046593Z" level=info msg="ignoring event" container=bb0c570df89930a216ebf2d54e6c9e8a81098c6f168e31c1b5723706708af034 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 27 22:54:48 addons-958846 dockerd[1288]: time="2024-08-27T22:54:48.657618844Z" level=info msg="ignoring event" container=c2ec699e9cbbbc49e301cba3a663600b52e48954de97c0afc2979e0791a49121 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 27 22:54:48 addons-958846 dockerd[1288]: time="2024-08-27T22:54:48.778308751Z" level=info msg="ignoring event" container=7cc38993a8b8eb31d454933ce664916fccc1bb3b1fa7eab930914973b8648d62 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	543830a38b646       fc9db2894f4e4                                                                                                                8 seconds ago       Exited              helper-pod                0                   1fa166b8854b6       helper-pod-delete-pvc-aad4af14-49f8-4159-b469-887f08026e79
	f9fe8072ff9ac       busybox@sha256:9ae97d36d26566ff84e8893c64a6dc4fe8ca6d1144bf5b87b2b85a32def253c7                                              11 seconds ago      Exited              busybox                   0                   f562a525bfa33       test-local-path
	985ac5c31b3ac       busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79                                              16 seconds ago      Exited              helper-pod                0                   41abfa0b0c476       helper-pod-create-pvc-aad4af14-49f8-4159-b469-887f08026e79
	66d7ec9da7962       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc            43 seconds ago      Exited              gadget                    7                   ee959e15b5a31       gadget-wbmrh
	6b76e98c95756       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 9 minutes ago       Running             gcp-auth                  0                   fa56c0899cc77       gcp-auth-89d5ffd79-ss722
	80b0a83aa3bcd       registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce             11 minutes ago      Running             controller                0                   828d8986e50f7       ingress-nginx-controller-bc57996ff-gjdpr
	b0267c4c7f27a       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              patch                     0                   db8f598a87b27       ingress-nginx-admission-patch-lmm4f
	d2b3158e3f8dc       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              create                    0                   a56fec489259f       ingress-nginx-admission-create-6wqt2
	103ff9ee5d5be       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                       12 minutes ago      Running             local-path-provisioner    0                   1bf1d3911bb43       local-path-provisioner-86d989889c-xvjt8
	cbaba0b6525bf       registry.k8s.io/metrics-server/metrics-server@sha256:db3800085a0957083930c3932b17580eec652cfb6156a05c0f79c7543e80d17a        12 minutes ago      Running             metrics-server            0                   36f7afe80380b       metrics-server-8988944d9-hh9mg
	115bc7a09fe26       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c             12 minutes ago      Running             minikube-ingress-dns      0                   82724d55a51b6       kube-ingress-dns-minikube
	a52ee88405d40       ba04bb24b9575                                                                                                                12 minutes ago      Running             storage-provisioner       0                   3fff198c8f655       storage-provisioner
	c28d76c33e03c       2437cf7621777                                                                                                                12 minutes ago      Running             coredns                   0                   d9c31662c50f1       coredns-6f6b679f8f-ff4rh
	2ffab64e540ff       71d55d66fd4ee                                                                                                                12 minutes ago      Running             kube-proxy                0                   fac0a32f83c09       kube-proxy-pcp6w
	38d7f300d5b0a       cd0f0ae0ec9e0                                                                                                                12 minutes ago      Running             kube-apiserver            0                   657d6fd108850       kube-apiserver-addons-958846
	f58026674ca37       27e3830e14027                                                                                                                12 minutes ago      Running             etcd                      0                   f5ab5d4c7f556       etcd-addons-958846
	c059885d843d8       fcb0683e6bdbd                                                                                                                12 minutes ago      Running             kube-controller-manager   0                   abc4bf6b2332b       kube-controller-manager-addons-958846
	772b75b961d89       fbbbd428abb4d                                                                                                                12 minutes ago      Running             kube-scheduler            0                   238190b46c793       kube-scheduler-addons-958846
	
	
	==> controller_ingress [80b0a83aa3bc] <==
	I0827 22:43:27.875480       7 main.go:205] "Creating API client" host="https://10.96.0.1:443"
	I0827 22:43:27.888935       7 main.go:248] "Running in Kubernetes cluster" major="1" minor="31" git="v1.31.0" state="clean" commit="9edcffcde5595e8a5b1a35f88c421764e575afce" platform="linux/arm64"
	I0827 22:43:28.452806       7 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0827 22:43:28.490433       7 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0827 22:43:28.522043       7 nginx.go:271] "Starting NGINX Ingress controller"
	I0827 22:43:28.545870       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"30088b31-eedb-4e65-ac96-d0a51c649589", APIVersion:"v1", ResourceVersion:"702", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0827 22:43:28.545959       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"5f0f594f-de89-44e4-aa63-010abdb0e06d", APIVersion:"v1", ResourceVersion:"703", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0827 22:43:28.545971       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"97dcf75f-d69e-44fd-9155-44caec24c899", APIVersion:"v1", ResourceVersion:"704", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0827 22:43:29.728245       7 nginx.go:317] "Starting NGINX process"
	I0827 22:43:29.728332       7 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0827 22:43:29.728631       7 nginx.go:337] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0827 22:43:29.729002       7 controller.go:193] "Configuration changes detected, backend reload required"
	I0827 22:43:29.746478       7 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0827 22:43:29.746714       7 status.go:85] "New leader elected" identity="ingress-nginx-controller-bc57996ff-gjdpr"
	I0827 22:43:29.758857       7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-bc57996ff-gjdpr" node="addons-958846"
	I0827 22:43:29.770236       7 controller.go:213] "Backend successfully reloaded"
	I0827 22:43:29.770526       7 controller.go:224] "Initial sync, sleeping for 1 second"
	I0827 22:43:29.770669       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-gjdpr", UID:"6fa990bb-24fb-4710-80f2-23d430a9fb23", APIVersion:"v1", ResourceVersion:"729", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	  Release:       v1.11.2
	  Build:         46e76e5916813cfca2a9b0bfdc34b69a0000f6b9
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.25.5
	
	-------------------------------------------------------------------------------
	
	
	
	==> coredns [c28d76c33e03] <==
	[INFO] 10.244.0.7:50747 - 31046 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00009553s
	[INFO] 10.244.0.7:46521 - 10983 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002384236s
	[INFO] 10.244.0.7:46521 - 43237 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002798791s
	[INFO] 10.244.0.7:59278 - 12502 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000137761s
	[INFO] 10.244.0.7:59278 - 35755 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000100494s
	[INFO] 10.244.0.7:42793 - 58025 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000203507s
	[INFO] 10.244.0.7:42793 - 50093 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000188352s
	[INFO] 10.244.0.7:58776 - 42841 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000066853s
	[INFO] 10.244.0.7:58776 - 2396 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000061381s
	[INFO] 10.244.0.7:34609 - 47115 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000064302s
	[INFO] 10.244.0.7:34609 - 49417 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000149896s
	[INFO] 10.244.0.7:37220 - 11589 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001675131s
	[INFO] 10.244.0.7:37220 - 39747 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001844038s
	[INFO] 10.244.0.7:36939 - 17199 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000129572s
	[INFO] 10.244.0.7:36939 - 24113 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000194531s
	[INFO] 10.244.0.25:32954 - 17708 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00024646s
	[INFO] 10.244.0.25:42670 - 60084 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00007679s
	[INFO] 10.244.0.25:60528 - 1245 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000143045s
	[INFO] 10.244.0.25:50518 - 37183 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000093134s
	[INFO] 10.244.0.25:58329 - 37586 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000097565s
	[INFO] 10.244.0.25:50230 - 48583 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000079178s
	[INFO] 10.244.0.25:42914 - 7234 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002530972s
	[INFO] 10.244.0.25:49132 - 63429 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.00207906s
	[INFO] 10.244.0.25:54624 - 55745 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00194696s
	[INFO] 10.244.0.25:51842 - 37435 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002079438s
	
	
	==> describe nodes <==
	Name:               addons-958846
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-958846
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0790207a2867fe8d040a9642b972c86ef680cdf
	                    minikube.k8s.io/name=addons-958846
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_27T22_42_02_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-958846
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 27 Aug 2024 22:41:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-958846
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 27 Aug 2024 22:54:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 27 Aug 2024 22:50:41 +0000   Tue, 27 Aug 2024 22:41:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 27 Aug 2024 22:50:41 +0000   Tue, 27 Aug 2024 22:41:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 27 Aug 2024 22:50:41 +0000   Tue, 27 Aug 2024 22:41:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 27 Aug 2024 22:50:41 +0000   Tue, 27 Aug 2024 22:41:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-958846
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 e79154f67f4c417b939e3bdf9cbabb52
	  System UUID:                01cf150a-bc97-44f2-8873-429906876d87
	  Boot ID:                    02a23870-c237-4235-b674-75b701f2885e
	  Kernel Version:             5.15.0-1068-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m18s
	  gadget                      gadget-wbmrh                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  gcp-auth                    gcp-auth-89d5ffd79-ss722                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  headlamp                    headlamp-57fb76fcdb-g4pbr                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-gjdpr    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         12m
	  kube-system                 coredns-6f6b679f8f-ff4rh                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-addons-958846                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-958846                250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-958846       200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-pcp6w                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-958846                100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 metrics-server-8988944d9-hh9mg              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         12m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          local-path-provisioner-86d989889c-xvjt8     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  0 (0%)
	  memory             460Mi (5%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node addons-958846 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x7 over 12m)  kubelet          Node addons-958846 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node addons-958846 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node addons-958846 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node addons-958846 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                kubelet          Node addons-958846 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                node-controller  Node addons-958846 event: Registered Node addons-958846 in Controller
	
	
	==> dmesg <==
	
	
	==> etcd [f58026674ca3] <==
	{"level":"info","ts":"2024-08-27T22:41:55.364257Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-27T22:41:55.365323Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-27T22:41:56.208519Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-27T22:41:56.208569Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-27T22:41:56.208609Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-08-27T22:41:56.208630Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-08-27T22:41:56.208638Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-08-27T22:41:56.208649Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-08-27T22:41:56.208657Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-08-27T22:41:56.212569Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-958846 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-27T22:41:56.212612Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-27T22:41:56.212892Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-27T22:41:56.213922Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-27T22:41:56.215133Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-27T22:41:56.236046Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-27T22:41:56.236166Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-27T22:41:56.236192Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-27T22:41:56.237191Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-27T22:41:56.237846Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-27T22:41:56.238686Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-08-27T22:41:56.276551Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-27T22:41:56.276772Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-27T22:51:57.037789Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1874}
	{"level":"info","ts":"2024-08-27T22:51:57.095975Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1874,"took":"57.327355ms","hash":1137577013,"current-db-size-bytes":8716288,"current-db-size":"8.7 MB","current-db-size-in-use-bytes":4882432,"current-db-size-in-use":"4.9 MB"}
	{"level":"info","ts":"2024-08-27T22:51:57.096032Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1137577013,"revision":1874,"compact-revision":-1}
	
	
	==> gcp-auth [6b76e98c9575] <==
	2024/08/27 22:45:09 Ready to write response ...
	2024/08/27 22:45:32 Ready to marshal response ...
	2024/08/27 22:45:32 Ready to write response ...
	2024/08/27 22:45:32 Ready to marshal response ...
	2024/08/27 22:45:32 Ready to write response ...
	2024/08/27 22:45:32 Ready to marshal response ...
	2024/08/27 22:45:32 Ready to write response ...
	2024/08/27 22:53:47 Ready to marshal response ...
	2024/08/27 22:53:47 Ready to write response ...
	2024/08/27 22:53:56 Ready to marshal response ...
	2024/08/27 22:53:56 Ready to write response ...
	2024/08/27 22:54:09 Ready to marshal response ...
	2024/08/27 22:54:09 Ready to write response ...
	2024/08/27 22:54:32 Ready to marshal response ...
	2024/08/27 22:54:32 Ready to write response ...
	2024/08/27 22:54:32 Ready to marshal response ...
	2024/08/27 22:54:32 Ready to write response ...
	2024/08/27 22:54:41 Ready to marshal response ...
	2024/08/27 22:54:41 Ready to write response ...
	2024/08/27 22:54:49 Ready to marshal response ...
	2024/08/27 22:54:49 Ready to write response ...
	2024/08/27 22:54:49 Ready to marshal response ...
	2024/08/27 22:54:49 Ready to write response ...
	2024/08/27 22:54:49 Ready to marshal response ...
	2024/08/27 22:54:49 Ready to write response ...
	
	
	==> kernel <==
	 22:54:50 up  6:37,  0 users,  load average: 1.16, 0.98, 1.81
	Linux addons-958846 5.15.0-1068-aws #74~20.04.1-Ubuntu SMP Tue Aug 6 19:45:17 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kube-apiserver [38d7f300d5b0] <==
	I0827 22:45:23.121974       1 handler.go:286] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
	I0827 22:45:23.141059       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0827 22:45:23.505041       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0827 22:45:23.554777       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0827 22:45:23.618801       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0827 22:45:23.698181       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	W0827 22:45:24.142025       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0827 22:45:24.273765       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0827 22:45:24.335414       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0827 22:45:24.358409       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0827 22:45:24.619544       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0827 22:45:24.940903       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0827 22:54:03.388399       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0827 22:54:24.832271       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0827 22:54:24.832315       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0827 22:54:24.861963       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0827 22:54:24.862010       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0827 22:54:24.923419       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0827 22:54:24.923484       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0827 22:54:25.001853       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0827 22:54:25.001913       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0827 22:54:25.926106       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0827 22:54:26.002019       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0827 22:54:26.125494       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0827 22:54:49.624029       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.110.178.25"}
	
	
	==> kube-controller-manager [c059885d843d] <==
	E0827 22:54:35.511211       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0827 22:54:35.647491       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0827 22:54:35.647536       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0827 22:54:36.345360       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0827 22:54:36.345398       1 shared_informer.go:320] Caches are synced for resource quota
	I0827 22:54:36.580758       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0827 22:54:36.580805       1 shared_informer.go:320] Caches are synced for garbage collector
	W0827 22:54:42.592346       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0827 22:54:42.592393       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0827 22:54:44.037419       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0827 22:54:44.037465       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0827 22:54:45.090439       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0827 22:54:45.090490       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0827 22:54:47.789518       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0827 22:54:47.789563       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0827 22:54:47.960785       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/cloud-spanner-emulator-769b77f747" duration="4.479µs"
	I0827 22:54:48.233655       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-6fb4cdfc84" duration="4.447µs"
	I0827 22:54:49.674701       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="23.451677ms"
	E0827 22:54:49.674735       1 replica_set.go:560] "Unhandled Error" err="sync \"headlamp/headlamp-57fb76fcdb\" failed with pods \"headlamp-57fb76fcdb-\" is forbidden: error looking up service account headlamp/headlamp: serviceaccount \"headlamp\" not found" logger="UnhandledError"
	W0827 22:54:49.713230       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0827 22:54:49.713285       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0827 22:54:49.727520       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="51.666768ms"
	I0827 22:54:49.742601       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="14.903683ms"
	I0827 22:54:49.760517       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="17.870854ms"
	I0827 22:54:49.760626       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="70.505µs"
	
	
	==> kube-proxy [2ffab64e540f] <==
	I0827 22:42:07.638969       1 server_linux.go:66] "Using iptables proxy"
	I0827 22:42:07.727747       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0827 22:42:07.729682       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0827 22:42:07.761447       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0827 22:42:07.761512       1 server_linux.go:169] "Using iptables Proxier"
	I0827 22:42:07.763624       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0827 22:42:07.763965       1 server.go:483] "Version info" version="v1.31.0"
	I0827 22:42:07.763981       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0827 22:42:07.765449       1 config.go:197] "Starting service config controller"
	I0827 22:42:07.765477       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0827 22:42:07.765498       1 config.go:104] "Starting endpoint slice config controller"
	I0827 22:42:07.765502       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0827 22:42:07.768422       1 config.go:326] "Starting node config controller"
	I0827 22:42:07.768458       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0827 22:42:07.865537       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0827 22:42:07.865606       1 shared_informer.go:320] Caches are synced for service config
	I0827 22:42:07.868784       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [772b75b961d8] <==
	W0827 22:41:59.025908       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0827 22:41:59.025933       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0827 22:41:59.027243       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0827 22:41:59.027284       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0827 22:41:59.980016       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0827 22:41:59.980159       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0827 22:41:59.993685       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0827 22:41:59.993862       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0827 22:42:00.088278       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0827 22:42:00.088339       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0827 22:42:00.099963       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0827 22:42:00.100998       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0827 22:42:00.238497       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0827 22:42:00.238772       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0827 22:42:00.276401       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0827 22:42:00.276764       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0827 22:42:00.283318       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0827 22:42:00.283868       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0827 22:42:00.319984       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0827 22:42:00.320811       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0827 22:42:00.337695       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0827 22:42:00.338101       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0827 22:42:00.396876       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0827 22:42:00.397215       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0827 22:42:03.314688       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 27 22:54:49 addons-958846 kubelet[2351]: E0827 22:54:49.283220    2351 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 9cde8e904cfe2027dc4c7865f93349c24d00ecf1202fbf07cfb62afa1f1e1a19" containerID="9cde8e904cfe2027dc4c7865f93349c24d00ecf1202fbf07cfb62afa1f1e1a19"
	Aug 27 22:54:49 addons-958846 kubelet[2351]: I0827 22:54:49.283279    2351 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"9cde8e904cfe2027dc4c7865f93349c24d00ecf1202fbf07cfb62afa1f1e1a19"} err="failed to get container status \"9cde8e904cfe2027dc4c7865f93349c24d00ecf1202fbf07cfb62afa1f1e1a19\": rpc error: code = Unknown desc = Error response from daemon: No such container: 9cde8e904cfe2027dc4c7865f93349c24d00ecf1202fbf07cfb62afa1f1e1a19"
	Aug 27 22:54:49 addons-958846 kubelet[2351]: I0827 22:54:49.283307    2351 scope.go:117] "RemoveContainer" containerID="9ced03b306160f50b4276c19fd9eced20473af3060a917ece70987e1e0f9faa9"
	Aug 27 22:54:49 addons-958846 kubelet[2351]: I0827 22:54:49.336700    2351 scope.go:117] "RemoveContainer" containerID="9ced03b306160f50b4276c19fd9eced20473af3060a917ece70987e1e0f9faa9"
	Aug 27 22:54:49 addons-958846 kubelet[2351]: E0827 22:54:49.337834    2351 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 9ced03b306160f50b4276c19fd9eced20473af3060a917ece70987e1e0f9faa9" containerID="9ced03b306160f50b4276c19fd9eced20473af3060a917ece70987e1e0f9faa9"
	Aug 27 22:54:49 addons-958846 kubelet[2351]: I0827 22:54:49.337875    2351 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"9ced03b306160f50b4276c19fd9eced20473af3060a917ece70987e1e0f9faa9"} err="failed to get container status \"9ced03b306160f50b4276c19fd9eced20473af3060a917ece70987e1e0f9faa9\": rpc error: code = Unknown desc = Error response from daemon: No such container: 9ced03b306160f50b4276c19fd9eced20473af3060a917ece70987e1e0f9faa9"
	Aug 27 22:54:49 addons-958846 kubelet[2351]: I0827 22:54:49.337902    2351 scope.go:117] "RemoveContainer" containerID="bb0c570df89930a216ebf2d54e6c9e8a81098c6f168e31c1b5723706708af034"
	Aug 27 22:54:49 addons-958846 kubelet[2351]: I0827 22:54:49.363966    2351 scope.go:117] "RemoveContainer" containerID="bb0c570df89930a216ebf2d54e6c9e8a81098c6f168e31c1b5723706708af034"
	Aug 27 22:54:49 addons-958846 kubelet[2351]: E0827 22:54:49.365115    2351 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: bb0c570df89930a216ebf2d54e6c9e8a81098c6f168e31c1b5723706708af034" containerID="bb0c570df89930a216ebf2d54e6c9e8a81098c6f168e31c1b5723706708af034"
	Aug 27 22:54:49 addons-958846 kubelet[2351]: I0827 22:54:49.365155    2351 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"bb0c570df89930a216ebf2d54e6c9e8a81098c6f168e31c1b5723706708af034"} err="failed to get container status \"bb0c570df89930a216ebf2d54e6c9e8a81098c6f168e31c1b5723706708af034\": rpc error: code = Unknown desc = Error response from daemon: No such container: bb0c570df89930a216ebf2d54e6c9e8a81098c6f168e31c1b5723706708af034"
	Aug 27 22:54:49 addons-958846 kubelet[2351]: I0827 22:54:49.660263    2351 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c757aa1-4447-48a4-9113-9bef04a988f4" path="/var/lib/kubelet/pods/4c757aa1-4447-48a4-9113-9bef04a988f4/volumes"
	Aug 27 22:54:49 addons-958846 kubelet[2351]: I0827 22:54:49.662197    2351 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da1c12e8-1196-41d2-bd28-7a786e714a8d" path="/var/lib/kubelet/pods/da1c12e8-1196-41d2-bd28-7a786e714a8d/volumes"
	Aug 27 22:54:49 addons-958846 kubelet[2351]: I0827 22:54:49.667652    2351 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5ae6a77-5549-4127-afe8-754b69830656" path="/var/lib/kubelet/pods/f5ae6a77-5549-4127-afe8-754b69830656/volumes"
	Aug 27 22:54:49 addons-958846 kubelet[2351]: I0827 22:54:49.669990    2351 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f77f0c2c-2c65-4211-879a-30b245bc30e8" path="/var/lib/kubelet/pods/f77f0c2c-2c65-4211-879a-30b245bc30e8/volumes"
	Aug 27 22:54:49 addons-958846 kubelet[2351]: E0827 22:54:49.707589    2351 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4c757aa1-4447-48a4-9113-9bef04a988f4" containerName="registry-proxy"
	Aug 27 22:54:49 addons-958846 kubelet[2351]: E0827 22:54:49.707624    2351 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="acb717d4-aac2-4b76-9d5b-d1cd13a7605f" containerName="helper-pod"
	Aug 27 22:54:49 addons-958846 kubelet[2351]: E0827 22:54:49.707635    2351 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="da1c12e8-1196-41d2-bd28-7a786e714a8d" containerName="cloud-spanner-emulator"
	Aug 27 22:54:49 addons-958846 kubelet[2351]: E0827 22:54:49.707653    2351 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f77f0c2c-2c65-4211-879a-30b245bc30e8" containerName="registry"
	Aug 27 22:54:49 addons-958846 kubelet[2351]: I0827 22:54:49.707687    2351 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c757aa1-4447-48a4-9113-9bef04a988f4" containerName="registry-proxy"
	Aug 27 22:54:49 addons-958846 kubelet[2351]: I0827 22:54:49.707698    2351 memory_manager.go:354] "RemoveStaleState removing state" podUID="f77f0c2c-2c65-4211-879a-30b245bc30e8" containerName="registry"
	Aug 27 22:54:49 addons-958846 kubelet[2351]: I0827 22:54:49.707706    2351 memory_manager.go:354] "RemoveStaleState removing state" podUID="da1c12e8-1196-41d2-bd28-7a786e714a8d" containerName="cloud-spanner-emulator"
	Aug 27 22:54:49 addons-958846 kubelet[2351]: I0827 22:54:49.707712    2351 memory_manager.go:354] "RemoveStaleState removing state" podUID="acb717d4-aac2-4b76-9d5b-d1cd13a7605f" containerName="helper-pod"
	Aug 27 22:54:49 addons-958846 kubelet[2351]: I0827 22:54:49.813337    2351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwfjn\" (UniqueName: \"kubernetes.io/projected/fd6580bf-d139-48b0-87a3-a80cd3b1050c-kube-api-access-pwfjn\") pod \"headlamp-57fb76fcdb-g4pbr\" (UID: \"fd6580bf-d139-48b0-87a3-a80cd3b1050c\") " pod="headlamp/headlamp-57fb76fcdb-g4pbr"
	Aug 27 22:54:49 addons-958846 kubelet[2351]: I0827 22:54:49.813389    2351 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/fd6580bf-d139-48b0-87a3-a80cd3b1050c-gcp-creds\") pod \"headlamp-57fb76fcdb-g4pbr\" (UID: \"fd6580bf-d139-48b0-87a3-a80cd3b1050c\") " pod="headlamp/headlamp-57fb76fcdb-g4pbr"
	Aug 27 22:54:50 addons-958846 kubelet[2351]: I0827 22:54:50.314970    2351 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dee4fbb4faedcad71f40e95cc44f3f9f9b353529a641d0174527a15b05fc05cc"
	
	
	==> storage-provisioner [a52ee88405d4] <==
	I0827 22:42:14.025507       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0827 22:42:14.047718       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0827 22:42:14.047787       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0827 22:42:14.059178       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0827 22:42:14.061615       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-958846_251a0b9c-600a-40f4-8a4f-6a2c7a844441!
	I0827 22:42:14.072391       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"042006eb-eb70-4ccd-aad6-7d9f39d347e8", APIVersion:"v1", ResourceVersion:"600", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-958846_251a0b9c-600a-40f4-8a4f-6a2c7a844441 became leader
	I0827 22:42:14.162742       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-958846_251a0b9c-600a-40f4-8a4f-6a2c7a844441!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-958846 -n addons-958846
helpers_test.go:261: (dbg) Run:  kubectl --context addons-958846 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox headlamp-57fb76fcdb-g4pbr ingress-nginx-admission-create-6wqt2 ingress-nginx-admission-patch-lmm4f
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-958846 describe pod busybox headlamp-57fb76fcdb-g4pbr ingress-nginx-admission-create-6wqt2 ingress-nginx-admission-patch-lmm4f
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-958846 describe pod busybox headlamp-57fb76fcdb-g4pbr ingress-nginx-admission-create-6wqt2 ingress-nginx-admission-patch-lmm4f: exit status 1 (149.886071ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-958846/192.168.49.2
	Start Time:       Tue, 27 Aug 2024 22:45:32 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nnbss (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-nnbss:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m19s                   default-scheduler  Successfully assigned default/busybox to addons-958846
	  Normal   Pulling    7m48s (x4 over 9m18s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m48s (x4 over 9m18s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m48s (x4 over 9m18s)   kubelet            Error: ErrImagePull
	  Warning  Failed     7m33s (x6 over 9m17s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m10s (x21 over 9m17s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "headlamp-57fb76fcdb-g4pbr" not found
	Error from server (NotFound): pods "ingress-nginx-admission-create-6wqt2" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-lmm4f" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-958846 describe pod busybox headlamp-57fb76fcdb-g4pbr ingress-nginx-admission-create-6wqt2 ingress-nginx-admission-patch-lmm4f: exit status 1
--- FAIL: TestAddons/parallel/Registry (75.64s)


Test pass (318/343)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 17.85
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.2
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.31.0/json-events 5.18
13 TestDownloadOnly/v1.31.0/preload-exists 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.07
18 TestDownloadOnly/v1.31.0/DeleteAll 0.2
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.55
22 TestOffline 86.06
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 222.16
29 TestAddons/serial/Volcano 40.29
31 TestAddons/serial/GCPAuth/Namespaces 0.19
34 TestAddons/parallel/Ingress 20.25
35 TestAddons/parallel/InspektorGadget 10.96
36 TestAddons/parallel/MetricsServer 5.69
39 TestAddons/parallel/CSI 37.5
40 TestAddons/parallel/Headlamp 17.5
41 TestAddons/parallel/CloudSpanner 6.78
42 TestAddons/parallel/LocalPath 9.44
43 TestAddons/parallel/NvidiaDevicePlugin 6.52
44 TestAddons/parallel/Yakd 11.87
45 TestAddons/StoppedEnableDisable 11.2
46 TestCertOptions 43.34
47 TestCertExpiration 250.78
48 TestDockerFlags 47.07
49 TestForceSystemdFlag 45.9
50 TestForceSystemdEnv 45.46
56 TestErrorSpam/setup 32.25
57 TestErrorSpam/start 0.84
58 TestErrorSpam/status 1.12
59 TestErrorSpam/pause 1.42
60 TestErrorSpam/unpause 1.51
61 TestErrorSpam/stop 2.06
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 45.94
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 35.83
68 TestFunctional/serial/KubeContext 0.07
69 TestFunctional/serial/KubectlGetPods 0.09
72 TestFunctional/serial/CacheCmd/cache/add_remote 3.48
73 TestFunctional/serial/CacheCmd/cache/add_local 1.05
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
75 TestFunctional/serial/CacheCmd/cache/list 0.06
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.65
78 TestFunctional/serial/CacheCmd/cache/delete 0.12
79 TestFunctional/serial/MinikubeKubectlCmd 0.14
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.16
81 TestFunctional/serial/ExtraConfig 41.25
82 TestFunctional/serial/ComponentHealth 0.11
83 TestFunctional/serial/LogsCmd 1.26
84 TestFunctional/serial/LogsFileCmd 1.27
85 TestFunctional/serial/InvalidService 5.13
87 TestFunctional/parallel/ConfigCmd 0.53
88 TestFunctional/parallel/DashboardCmd 13.04
89 TestFunctional/parallel/DryRun 0.44
90 TestFunctional/parallel/InternationalLanguage 0.18
91 TestFunctional/parallel/StatusCmd 1.29
95 TestFunctional/parallel/ServiceCmdConnect 6.76
96 TestFunctional/parallel/AddonsCmd 0.15
97 TestFunctional/parallel/PersistentVolumeClaim 27.78
99 TestFunctional/parallel/SSHCmd 0.73
100 TestFunctional/parallel/CpCmd 2
102 TestFunctional/parallel/FileSync 0.4
103 TestFunctional/parallel/CertSync 2.14
107 TestFunctional/parallel/NodeLabels 0.11
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.41
111 TestFunctional/parallel/License 0.25
112 TestFunctional/parallel/Version/short 0.07
113 TestFunctional/parallel/Version/components 1.18
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
118 TestFunctional/parallel/ImageCommands/ImageBuild 3.39
119 TestFunctional/parallel/ImageCommands/Setup 0.87
120 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.24
121 TestFunctional/parallel/DockerEnv/bash 1.33
122 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.93
123 TestFunctional/parallel/UpdateContextCmd/no_changes 0.22
124 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.23
125 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.23
126 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.3
127 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.45
128 TestFunctional/parallel/ServiceCmd/DeployApp 10.41
129 TestFunctional/parallel/ImageCommands/ImageRemove 0.6
130 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.74
131 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.54
133 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.69
134 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
136 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.46
137 TestFunctional/parallel/ServiceCmd/List 0.34
138 TestFunctional/parallel/ServiceCmd/JSONOutput 0.36
139 TestFunctional/parallel/ServiceCmd/HTTPS 0.36
140 TestFunctional/parallel/ServiceCmd/Format 0.37
141 TestFunctional/parallel/ServiceCmd/URL 0.38
142 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
143 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
147 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
148 TestFunctional/parallel/ProfileCmd/profile_not_create 0.55
149 TestFunctional/parallel/ProfileCmd/profile_list 0.53
150 TestFunctional/parallel/ProfileCmd/profile_json_output 0.5
151 TestFunctional/parallel/MountCmd/any-port 7.3
152 TestFunctional/parallel/MountCmd/specific-port 2.31
153 TestFunctional/parallel/MountCmd/VerifyCleanup 1.31
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
160 TestMultiControlPlane/serial/StartCluster 130.66
161 TestMultiControlPlane/serial/DeployApp 43.6
162 TestMultiControlPlane/serial/PingHostFromPods 1.75
163 TestMultiControlPlane/serial/AddWorkerNode 28.44
164 TestMultiControlPlane/serial/NodeLabels 0.11
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.8
166 TestMultiControlPlane/serial/CopyFile 19.62
167 TestMultiControlPlane/serial/StopSecondaryNode 11.81
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.6
169 TestMultiControlPlane/serial/RestartSecondaryNode 77.28
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.79
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 265.91
172 TestMultiControlPlane/serial/DeleteSecondaryNode 11.35
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.59
174 TestMultiControlPlane/serial/StopCluster 33.16
175 TestMultiControlPlane/serial/RestartCluster 147.58
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.59
177 TestMultiControlPlane/serial/AddSecondaryNode 47.67
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.82
181 TestImageBuild/serial/Setup 31.09
182 TestImageBuild/serial/NormalBuild 2
183 TestImageBuild/serial/BuildWithBuildArg 0.98
184 TestImageBuild/serial/BuildWithDockerIgnore 0.81
185 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.92
189 TestJSONOutput/start/Command 41.18
190 TestJSONOutput/start/Audit 0
192 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/pause/Command 0.6
196 TestJSONOutput/pause/Audit 0
198 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/unpause/Command 0.51
202 TestJSONOutput/unpause/Audit 0
204 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
207 TestJSONOutput/stop/Command 5.76
208 TestJSONOutput/stop/Audit 0
210 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
211 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
212 TestErrorJSONOutput 0.23
214 TestKicCustomNetwork/create_custom_network 35.64
215 TestKicCustomNetwork/use_default_bridge_network 34.02
216 TestKicExistingNetwork 34.29
217 TestKicCustomSubnet 33.94
218 TestKicStaticIP 35.37
219 TestMainNoArgs 0.05
220 TestMinikubeProfile 72.53
223 TestMountStart/serial/StartWithMountFirst 7.83
224 TestMountStart/serial/VerifyMountFirst 0.29
225 TestMountStart/serial/StartWithMountSecond 8.39
226 TestMountStart/serial/VerifyMountSecond 0.28
227 TestMountStart/serial/DeleteFirst 1.48
228 TestMountStart/serial/VerifyMountPostDelete 0.27
229 TestMountStart/serial/Stop 1.22
230 TestMountStart/serial/RestartStopped 8.48
231 TestMountStart/serial/VerifyMountPostStop 0.27
234 TestMultiNode/serial/FreshStart2Nodes 85.14
235 TestMultiNode/serial/DeployApp2Nodes 41.49
236 TestMultiNode/serial/PingHostFrom2Pods 1.02
237 TestMultiNode/serial/AddNode 16.84
238 TestMultiNode/serial/MultiNodeLabels 0.1
239 TestMultiNode/serial/ProfileList 0.37
240 TestMultiNode/serial/CopyFile 10.71
241 TestMultiNode/serial/StopNode 2.27
242 TestMultiNode/serial/StartAfterStop 11.25
243 TestMultiNode/serial/RestartKeepsNodes 100.89
244 TestMultiNode/serial/DeleteNode 5.69
245 TestMultiNode/serial/StopMultiNode 21.54
246 TestMultiNode/serial/RestartMultiNode 58.08
247 TestMultiNode/serial/ValidateNameConflict 35.77
252 TestPreload 139.5
254 TestScheduledStopUnix 103.59
255 TestSkaffold 117.28
257 TestInsufficientStorage 13.44
258 TestRunningBinaryUpgrade 104.89
260 TestKubernetesUpgrade 380.51
261 TestMissingContainerUpgrade 115.15
263 TestPause/serial/Start 80.43
264 TestPause/serial/SecondStartNoReconfiguration 35.4
266 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
267 TestNoKubernetes/serial/StartWithK8s 37.59
268 TestPause/serial/Pause 0.8
269 TestPause/serial/VerifyStatus 0.42
270 TestPause/serial/Unpause 0.69
271 TestPause/serial/PauseAgain 0.91
272 TestPause/serial/DeletePaused 2.24
273 TestPause/serial/VerifyDeletedResources 0.59
285 TestNoKubernetes/serial/StartWithStopK8s 19.58
286 TestNoKubernetes/serial/Start 8.78
287 TestNoKubernetes/serial/VerifyK8sNotRunning 0.35
288 TestNoKubernetes/serial/ProfileList 1.2
289 TestNoKubernetes/serial/Stop 1.29
290 TestNoKubernetes/serial/StartNoArgs 9.23
291 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.36
292 TestStoppedBinaryUpgrade/Setup 0.68
293 TestStoppedBinaryUpgrade/Upgrade 128.86
294 TestStoppedBinaryUpgrade/MinikubeLogs 1.48
302 TestNetworkPlugins/group/auto/Start 82.18
303 TestNetworkPlugins/group/auto/KubeletFlags 0.51
304 TestNetworkPlugins/group/auto/NetCatPod 10.29
305 TestNetworkPlugins/group/auto/DNS 0.24
306 TestNetworkPlugins/group/auto/Localhost 0.19
307 TestNetworkPlugins/group/auto/HairPin 0.31
308 TestNetworkPlugins/group/kindnet/Start 55.65
309 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
310 TestNetworkPlugins/group/kindnet/KubeletFlags 0.3
311 TestNetworkPlugins/group/kindnet/NetCatPod 11.25
312 TestNetworkPlugins/group/kindnet/DNS 0.29
313 TestNetworkPlugins/group/kindnet/Localhost 0.16
314 TestNetworkPlugins/group/kindnet/HairPin 0.2
315 TestNetworkPlugins/group/calico/Start 84.65
316 TestNetworkPlugins/group/custom-flannel/Start 66.08
317 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.33
318 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.31
319 TestNetworkPlugins/group/calico/ControllerPod 6.01
320 TestNetworkPlugins/group/calico/KubeletFlags 0.32
321 TestNetworkPlugins/group/calico/NetCatPod 10.27
322 TestNetworkPlugins/group/custom-flannel/DNS 0.25
323 TestNetworkPlugins/group/custom-flannel/Localhost 0.29
324 TestNetworkPlugins/group/custom-flannel/HairPin 0.29
325 TestNetworkPlugins/group/calico/DNS 0.28
326 TestNetworkPlugins/group/calico/Localhost 0.26
327 TestNetworkPlugins/group/calico/HairPin 0.25
328 TestNetworkPlugins/group/false/Start 59.27
329 TestNetworkPlugins/group/enable-default-cni/Start 47.53
330 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.31
331 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.28
332 TestNetworkPlugins/group/false/KubeletFlags 0.32
333 TestNetworkPlugins/group/false/NetCatPod 11.32
334 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
335 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
336 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
337 TestNetworkPlugins/group/false/DNS 0.22
338 TestNetworkPlugins/group/false/Localhost 0.17
339 TestNetworkPlugins/group/false/HairPin 0.17
340 TestNetworkPlugins/group/flannel/Start 67.66
341 TestNetworkPlugins/group/bridge/Start 82.39
342 TestNetworkPlugins/group/flannel/ControllerPod 6.01
343 TestNetworkPlugins/group/flannel/KubeletFlags 0.31
344 TestNetworkPlugins/group/flannel/NetCatPod 10.3
345 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
346 TestNetworkPlugins/group/bridge/NetCatPod 10.38
347 TestNetworkPlugins/group/flannel/DNS 0.29
348 TestNetworkPlugins/group/flannel/Localhost 0.2
349 TestNetworkPlugins/group/flannel/HairPin 0.19
350 TestNetworkPlugins/group/bridge/DNS 0.3
351 TestNetworkPlugins/group/bridge/Localhost 0.28
352 TestNetworkPlugins/group/bridge/HairPin 0.2
353 TestNetworkPlugins/group/kubenet/Start 86.97
355 TestStartStop/group/old-k8s-version/serial/FirstStart 130.33
356 TestNetworkPlugins/group/kubenet/KubeletFlags 0.27
357 TestNetworkPlugins/group/kubenet/NetCatPod 11.29
358 TestNetworkPlugins/group/kubenet/DNS 0.2
359 TestNetworkPlugins/group/kubenet/Localhost 0.18
360 TestNetworkPlugins/group/kubenet/HairPin 0.16
362 TestStartStop/group/embed-certs/serial/FirstStart 76.48
363 TestStartStop/group/old-k8s-version/serial/DeployApp 9.6
364 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.29
365 TestStartStop/group/old-k8s-version/serial/Stop 10.95
366 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
367 TestStartStop/group/old-k8s-version/serial/SecondStart 121.13
368 TestStartStop/group/embed-certs/serial/DeployApp 8.4
369 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.42
370 TestStartStop/group/embed-certs/serial/Stop 11.21
371 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
372 TestStartStop/group/embed-certs/serial/SecondStart 267.43
373 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
374 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.12
375 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.22
376 TestStartStop/group/old-k8s-version/serial/Pause 2.87
378 TestStartStop/group/no-preload/serial/FirstStart 87.34
379 TestStartStop/group/no-preload/serial/DeployApp 9.35
380 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.11
381 TestStartStop/group/no-preload/serial/Stop 10.97
382 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
383 TestStartStop/group/no-preload/serial/SecondStart 266.7
384 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
385 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
386 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
387 TestStartStop/group/embed-certs/serial/Pause 2.91
389 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 45.04
390 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.37
391 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.16
392 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.84
393 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
394 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 266.66
395 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
396 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
397 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.22
398 TestStartStop/group/no-preload/serial/Pause 3.13
400 TestStartStop/group/newest-cni/serial/FirstStart 39.71
401 TestStartStop/group/newest-cni/serial/DeployApp 0
402 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.12
403 TestStartStop/group/newest-cni/serial/Stop 11.03
404 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
405 TestStartStop/group/newest-cni/serial/SecondStart 18.37
406 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
407 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
408 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
409 TestStartStop/group/newest-cni/serial/Pause 3.07
410 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
411 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
412 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
413 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.86

TestDownloadOnly/v1.20.0/json-events (17.85s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-871822 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-871822 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (17.852170417s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (17.85s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-871822
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-871822: exit status 85 (64.494145ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-871822 | jenkins | v1.33.1 | 27 Aug 24 22:40 UTC |          |
	|         | -p download-only-871822        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/27 22:40:44
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0827 22:40:44.806336 1743254 out.go:345] Setting OutFile to fd 1 ...
	I0827 22:40:44.806524 1743254 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 22:40:44.806564 1743254 out.go:358] Setting ErrFile to fd 2...
	I0827 22:40:44.806587 1743254 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 22:40:44.806866 1743254 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19522-1737862/.minikube/bin
	W0827 22:40:44.807047 1743254 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19522-1737862/.minikube/config/config.json: open /home/jenkins/minikube-integration/19522-1737862/.minikube/config/config.json: no such file or directory
	I0827 22:40:44.807534 1743254 out.go:352] Setting JSON to true
	I0827 22:40:44.808788 1743254 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":22993,"bootTime":1724775452,"procs":415,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0827 22:40:44.808892 1743254 start.go:139] virtualization:  
	I0827 22:40:44.813919 1743254 out.go:97] [download-only-871822] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	W0827 22:40:44.814098 1743254 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19522-1737862/.minikube/cache/preloaded-tarball: no such file or directory
	I0827 22:40:44.814215 1743254 notify.go:220] Checking for updates...
	I0827 22:40:44.818928 1743254 out.go:169] MINIKUBE_LOCATION=19522
	I0827 22:40:44.822185 1743254 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0827 22:40:44.825394 1743254 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19522-1737862/kubeconfig
	I0827 22:40:44.828507 1743254 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19522-1737862/.minikube
	I0827 22:40:44.830894 1743254 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0827 22:40:44.835896 1743254 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0827 22:40:44.836152 1743254 driver.go:392] Setting default libvirt URI to qemu:///system
	I0827 22:40:44.861690 1743254 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0827 22:40:44.861804 1743254 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0827 22:40:44.922770 1743254 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-27 22:40:44.913063447 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0827 22:40:44.922884 1743254 docker.go:307] overlay module found
	I0827 22:40:44.925142 1743254 out.go:97] Using the docker driver based on user configuration
	I0827 22:40:44.925171 1743254 start.go:297] selected driver: docker
	I0827 22:40:44.925178 1743254 start.go:901] validating driver "docker" against <nil>
	I0827 22:40:44.925295 1743254 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0827 22:40:44.979455 1743254 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-27 22:40:44.970252701 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0827 22:40:44.979618 1743254 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0827 22:40:44.979911 1743254 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0827 22:40:44.980073 1743254 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0827 22:40:44.983517 1743254 out.go:169] Using Docker driver with root privileges
	I0827 22:40:44.989492 1743254 cni.go:84] Creating CNI manager for ""
	I0827 22:40:44.989525 1743254 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0827 22:40:44.989617 1743254 start.go:340] cluster config:
	{Name:download-only-871822 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-871822 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 22:40:44.992776 1743254 out.go:97] Starting "download-only-871822" primary control-plane node in "download-only-871822" cluster
	I0827 22:40:44.992799 1743254 cache.go:121] Beginning downloading kic base image for docker with docker
	I0827 22:40:44.996239 1743254 out.go:97] Pulling base image v0.0.44-1724667927-19511 ...
	I0827 22:40:44.996264 1743254 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0827 22:40:44.996357 1743254 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 in local docker daemon
	I0827 22:40:45.033008 1743254 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 to local cache
	I0827 22:40:45.040080 1743254 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 in local cache directory
	I0827 22:40:45.040212 1743254 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 to local cache
	I0827 22:40:45.062426 1743254 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0827 22:40:45.062454 1743254 cache.go:56] Caching tarball of preloaded images
	I0827 22:40:45.063324 1743254 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0827 22:40:45.083306 1743254 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0827 22:40:45.083351 1743254 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0827 22:40:45.179990 1743254 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /home/jenkins/minikube-integration/19522-1737862/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0827 22:40:49.497408 1743254 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0827 22:40:49.497539 1743254 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19522-1737862/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0827 22:40:50.553914 1743254 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0827 22:40:50.554282 1743254 profile.go:143] Saving config to /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/download-only-871822/config.json ...
	I0827 22:40:50.554321 1743254 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/download-only-871822/config.json: {Name:mk62bf98c76f214c20b8674075b9efa71cb10ae2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 22:40:50.554512 1743254 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0827 22:40:50.555294 1743254 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/19522-1737862/.minikube/cache/linux/arm64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-871822 host does not exist
	  To start a cluster, run: "minikube start -p download-only-871822"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.20s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-871822
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.31.0/json-events (5.18s)

=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-770752 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-770752 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (5.181489575s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (5.18s)

                                                
                                    
TestDownloadOnly/v1.31.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-770752
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-770752: exit status 85 (69.781399ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-871822 | jenkins | v1.33.1 | 27 Aug 24 22:40 UTC |                     |
	|         | -p download-only-871822        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 27 Aug 24 22:41 UTC | 27 Aug 24 22:41 UTC |
	| delete  | -p download-only-871822        | download-only-871822 | jenkins | v1.33.1 | 27 Aug 24 22:41 UTC | 27 Aug 24 22:41 UTC |
	| start   | -o=json --download-only        | download-only-770752 | jenkins | v1.33.1 | 27 Aug 24 22:41 UTC |                     |
	|         | -p download-only-770752        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/27 22:41:03
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0827 22:41:03.064072 1743457 out.go:345] Setting OutFile to fd 1 ...
	I0827 22:41:03.064293 1743457 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 22:41:03.064321 1743457 out.go:358] Setting ErrFile to fd 2...
	I0827 22:41:03.064339 1743457 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 22:41:03.064663 1743457 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19522-1737862/.minikube/bin
	I0827 22:41:03.065154 1743457 out.go:352] Setting JSON to true
	I0827 22:41:03.066406 1743457 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":23011,"bootTime":1724775452,"procs":412,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0827 22:41:03.066517 1743457 start.go:139] virtualization:  
	I0827 22:41:03.068795 1743457 out.go:97] [download-only-770752] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0827 22:41:03.069069 1743457 notify.go:220] Checking for updates...
	I0827 22:41:03.071893 1743457 out.go:169] MINIKUBE_LOCATION=19522
	I0827 22:41:03.074714 1743457 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0827 22:41:03.076954 1743457 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19522-1737862/kubeconfig
	I0827 22:41:03.079089 1743457 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19522-1737862/.minikube
	I0827 22:41:03.081029 1743457 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0827 22:41:03.084890 1743457 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0827 22:41:03.085157 1743457 driver.go:392] Setting default libvirt URI to qemu:///system
	I0827 22:41:03.115543 1743457 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0827 22:41:03.115674 1743457 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0827 22:41:03.169201 1743457 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-27 22:41:03.159646052 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0827 22:41:03.169313 1743457 docker.go:307] overlay module found
	I0827 22:41:03.172759 1743457 out.go:97] Using the docker driver based on user configuration
	I0827 22:41:03.172792 1743457 start.go:297] selected driver: docker
	I0827 22:41:03.172800 1743457 start.go:901] validating driver "docker" against <nil>
	I0827 22:41:03.172932 1743457 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0827 22:41:03.230063 1743457 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-27 22:41:03.219977578 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0827 22:41:03.230227 1743457 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0827 22:41:03.230516 1743457 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0827 22:41:03.230686 1743457 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0827 22:41:03.232242 1743457 out.go:169] Using Docker driver with root privileges
	I0827 22:41:03.233264 1743457 cni.go:84] Creating CNI manager for ""
	I0827 22:41:03.233296 1743457 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0827 22:41:03.233310 1743457 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0827 22:41:03.233396 1743457 start.go:340] cluster config:
	{Name:download-only-770752 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-770752 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 22:41:03.235169 1743457 out.go:97] Starting "download-only-770752" primary control-plane node in "download-only-770752" cluster
	I0827 22:41:03.235203 1743457 cache.go:121] Beginning downloading kic base image for docker with docker
	I0827 22:41:03.237248 1743457 out.go:97] Pulling base image v0.0.44-1724667927-19511 ...
	I0827 22:41:03.237288 1743457 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0827 22:41:03.237389 1743457 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 in local docker daemon
	I0827 22:41:03.252553 1743457 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 to local cache
	I0827 22:41:03.252686 1743457 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 in local cache directory
	I0827 22:41:03.252714 1743457 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 in local cache directory, skipping pull
	I0827 22:41:03.252719 1743457 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 exists in cache, skipping pull
	I0827 22:41:03.252728 1743457 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 as a tarball
	I0827 22:41:03.297909 1743457 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0827 22:41:03.297936 1743457 cache.go:56] Caching tarball of preloaded images
	I0827 22:41:03.298651 1743457 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0827 22:41:03.299985 1743457 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0827 22:41:03.300003 1743457 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 ...
	I0827 22:41:03.390007 1743457 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4?checksum=md5:90c22abece392b762c0b4e45be981bb4 -> /home/jenkins/minikube-integration/19522-1737862/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0827 22:41:06.671414 1743457 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 ...
	I0827 22:41:06.671569 1743457 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19522-1737862/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 ...
	I0827 22:41:07.433165 1743457 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0827 22:41:07.433544 1743457 profile.go:143] Saving config to /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/download-only-770752/config.json ...
	I0827 22:41:07.433581 1743457 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/download-only-770752/config.json: {Name:mk19988971621a70f3ae4b91963bb7440e55dc69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 22:41:07.434320 1743457 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0827 22:41:07.434499 1743457 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/19522-1737862/.minikube/cache/linux/arm64/v1.31.0/kubectl
	
	
	* The control-plane node download-only-770752 host does not exist
	  To start a cluster, run: "minikube start -p download-only-770752"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.31.0/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.20s)

                                                
                                    
TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-770752
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestBinaryMirror (0.55s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-761535 --alsologtostderr --binary-mirror http://127.0.0.1:41893 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-761535" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-761535
--- PASS: TestBinaryMirror (0.55s)

                                                
                                    
TestOffline (86.06s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-arm64 start -p offline-docker-463124 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-arm64 start -p offline-docker-463124 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (1m23.781263765s)
helpers_test.go:175: Cleaning up "offline-docker-463124" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p offline-docker-463124
E0827 23:33:45.530076 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/functional-300627/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p offline-docker-463124: (2.276605216s)
--- PASS: TestOffline (86.06s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-958846
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-958846: exit status 85 (72.687699ms)

                                                
                                                
-- stdout --
	* Profile "addons-958846" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-958846"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-958846
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-958846: exit status 85 (72.368731ms)

                                                
                                                
-- stdout --
	* Profile "addons-958846" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-958846"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/Setup (222.16s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-958846 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-958846 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns: (3m42.157860924s)
--- PASS: TestAddons/Setup (222.16s)

                                                
                                    
TestAddons/serial/Volcano (40.29s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:913: volcano-controller stabilized in 93.814464ms
addons_test.go:905: volcano-admission stabilized in 94.034898ms
addons_test.go:897: volcano-scheduler stabilized in 94.074216ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-2kzg6" [99284f3e-d513-489a-b5bc-0ee6ee15734b] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003270307s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-kcx2j" [ea520a5b-bc07-40d8-a319-f7b6199d6c2a] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.00393944s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-cwwvv" [b0cb672b-970c-4770-b14e-e891b4292875] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003117928s
addons_test.go:932: (dbg) Run:  kubectl --context addons-958846 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-958846 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-958846 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [f05aba2f-ae86-42e0-a964-8adf817dcee8] Pending
helpers_test.go:344: "test-job-nginx-0" [f05aba2f-ae86-42e0-a964-8adf817dcee8] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [f05aba2f-ae86-42e0-a964-8adf817dcee8] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.004386819s
addons_test.go:968: (dbg) Run:  out/minikube-linux-arm64 -p addons-958846 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-linux-arm64 -p addons-958846 addons disable volcano --alsologtostderr -v=1: (10.540009317s)
--- PASS: TestAddons/serial/Volcano (40.29s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.19s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-958846 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-958846 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

                                                
                                    
TestAddons/parallel/Ingress (20.25s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-958846 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-958846 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-958846 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [d94b4a18-cde3-4964-a7e1-394d7f4fbfa2] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [d94b4a18-cde3-4964-a7e1-394d7f4fbfa2] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003986179s
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-958846 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-958846 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-958846 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p addons-958846 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-arm64 -p addons-958846 addons disable ingress-dns --alsologtostderr -v=1: (1.829882532s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-arm64 -p addons-958846 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-arm64 -p addons-958846 addons disable ingress --alsologtostderr -v=1: (7.749082522s)
--- PASS: TestAddons/parallel/Ingress (20.25s)

                                                
                                    
TestAddons/parallel/InspektorGadget (10.96s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-wbmrh" [9a467223-368d-4af7-8d18-c8fba668423a] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.005123869s
addons_test.go:851: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-958846
addons_test.go:851: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-958846: (5.954105771s)
--- PASS: TestAddons/parallel/InspektorGadget (10.96s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.69s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 3.244046ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-8988944d9-hh9mg" [1fac511e-cb11-4c1d-9bf8-70c4b3e623ec] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004511718s
addons_test.go:417: (dbg) Run:  kubectl --context addons-958846 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-arm64 -p addons-958846 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.69s)

                                                
                                    
TestAddons/parallel/CSI (37.5s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 9.23579ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-958846 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-958846 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-958846 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-958846 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-958846 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-958846 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-958846 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-958846 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-958846 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-958846 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-958846 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [cdcd15ed-74b1-4a8e-b9da-69872c8f849e] Pending
helpers_test.go:344: "task-pv-pod" [cdcd15ed-74b1-4a8e-b9da-69872c8f849e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [cdcd15ed-74b1-4a8e-b9da-69872c8f849e] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.003427606s
addons_test.go:590: (dbg) Run:  kubectl --context addons-958846 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-958846 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-958846 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-958846 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-958846 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-958846 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-958846 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-958846 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-958846 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-958846 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-958846 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-958846 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [b5057629-be75-422b-8f35-15a1459da10d] Pending
helpers_test.go:344: "task-pv-pod-restore" [b5057629-be75-422b-8f35-15a1459da10d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [b5057629-be75-422b-8f35-15a1459da10d] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003948939s
addons_test.go:632: (dbg) Run:  kubectl --context addons-958846 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-958846 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-958846 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-arm64 -p addons-958846 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-arm64 -p addons-958846 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.687748953s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-arm64 -p addons-958846 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (37.50s)
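The repeated `helpers_test.go:394` invocations above are a poll loop: the helper re-runs `kubectl get pvc ... -o jsonpath={.status.phase}` until the claim reports the expected phase or the wait deadline (6m0s here) expires. A minimal standalone sketch of that pattern, with an illustrative function name and a shortened deadline, demonstrated against a stub command rather than a live cluster:

```shell
# wait_for_phase: re-run a command until its output equals the expected
# value or a deadline passes, mirroring the helper's PVC phase polling.
wait_for_phase() {
  local expected="$1"; shift
  local deadline=$((SECONDS + 10))   # the test helpers wait up to 6m0s
  while [ "$SECONDS" -lt "$deadline" ]; do
    phase="$("$@")"
    if [ "$phase" = "$expected" ]; then
      echo "reached phase: $phase"
      return 0
    fi
    sleep 1
  done
  echo "timed out waiting for phase $expected" >&2
  return 1
}

# Against the cluster in this report the call would look like:
#   wait_for_phase Bound kubectl --context addons-958846 get pvc hpvc \
#     -o jsonpath={.status.phase} -n default
# Demonstrated here with a stub that reports "Bound" immediately:
wait_for_phase Bound echo Bound
```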

                                                
                                    
TestAddons/parallel/Headlamp (17.5s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-958846 --alsologtostderr -v=1
addons_test.go:830: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-958846 --alsologtostderr -v=1: (1.629328009s)
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-g4pbr" [fd6580bf-d139-48b0-87a3-a80cd3b1050c] Pending
helpers_test.go:344: "headlamp-57fb76fcdb-g4pbr" [fd6580bf-d139-48b0-87a3-a80cd3b1050c] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-g4pbr" [fd6580bf-d139-48b0-87a3-a80cd3b1050c] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.00395496s
addons_test.go:839: (dbg) Run:  out/minikube-linux-arm64 -p addons-958846 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-arm64 -p addons-958846 addons disable headlamp --alsologtostderr -v=1: (5.870341159s)
--- PASS: TestAddons/parallel/Headlamp (17.50s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.78s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-fp2lq" [da1c12e8-1196-41d2-bd28-7a786e714a8d] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003795402s
addons_test.go:870: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-958846
2024/08/27 22:54:47 [DEBUG] GET http://192.168.49.2:5000
--- PASS: TestAddons/parallel/CloudSpanner (6.78s)

                                                
                                    
TestAddons/parallel/LocalPath (9.44s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-958846 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-958846 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-958846 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-958846 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-958846 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-958846 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-958846 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-958846 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [1efff684-b9fe-40af-b8f3-cec9b7c8d541] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [1efff684-b9fe-40af-b8f3-cec9b7c8d541] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [1efff684-b9fe-40af-b8f3-cec9b7c8d541] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003478066s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-958846 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-arm64 -p addons-958846 ssh "cat /opt/local-path-provisioner/pvc-aad4af14-49f8-4159-b469-887f08026e79_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-958846 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-958846 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-arm64 -p addons-958846 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (9.44s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.52s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-ggb2x" [a1d78076-3b4f-478e-b23f-c467a85cbf00] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004644876s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-958846
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.52s)

                                                
                                    
TestAddons/parallel/Yakd (11.87s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-mqv4v" [0622a729-48ee-43b5-8d84-9a45486b48cc] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003662183s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-arm64 -p addons-958846 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-arm64 -p addons-958846 addons disable yakd --alsologtostderr -v=1: (5.863684236s)
--- PASS: TestAddons/parallel/Yakd (11.87s)

                                                
                                    
TestAddons/StoppedEnableDisable (11.2s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-958846
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-958846: (10.914698201s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-958846
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-958846
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-958846
--- PASS: TestAddons/StoppedEnableDisable (11.20s)

                                                
                                    
TestCertOptions (43.34s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-649982 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-649982 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (40.374556037s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-649982 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-649982 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-649982 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-649982" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-649982
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-649982: (2.254758997s)
--- PASS: TestCertOptions (43.34s)
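The cert-options test above verifies that the extra `--apiserver-ips`/`--apiserver-names` values ended up as Subject Alternative Names in the apiserver certificate, inspecting it with `openssl x509 -text`. The same inspection can be reproduced against any certificate; in this sketch a throwaway self-signed cert is generated first (the `/tmp` paths are illustrative, not minikube's, and `-addext` needs OpenSSL 1.1.1+):

```shell
# Generate a short-lived self-signed cert carrying the same SANs the test
# passes to minikube start.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout /tmp/demo.key -out /tmp/demo.crt -days 1 -subj "/CN=localhost" \
  -addext "subjectAltName=DNS:localhost,DNS:www.google.com,IP:192.168.15.15"

# Print the SAN block, as the test does for
# /var/lib/minikube/certs/apiserver.crt on the node:
openssl x509 -text -noout -in /tmp/demo.crt | grep -A1 "Subject Alternative Name"
```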

                                                
                                    
TestCertExpiration (250.78s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-786046 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-786046 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (44.334342738s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-786046 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-786046 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (24.223151775s)
helpers_test.go:175: Cleaning up "cert-expiration-786046" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-786046
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-786046: (2.226516139s)
--- PASS: TestCertExpiration (250.78s)

                                                
                                    
TestDockerFlags (47.07s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-arm64 start -p docker-flags-746170 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-arm64 start -p docker-flags-746170 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (43.672118398s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-746170 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-746170 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-746170" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-flags-746170
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-flags-746170: (2.69408646s)
--- PASS: TestDockerFlags (47.07s)
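TestDockerFlags checks that the `--docker-env` values reach dockerd through the systemd unit environment, which it reads on the node with `systemctl show docker --property=Environment`. The assertion itself is just substring checks on that output; a sketch of that check against a sample `Environment=` line (the sample text is illustrative, not captured from this run):

```shell
# Simulated `systemctl show docker --property=Environment` output line:
env_line='Environment=FOO=BAR BAZ=BAT'

# Verify each expected --docker-env pair appears in the unit environment.
for want in FOO=BAR BAZ=BAT; do
  case "$env_line" in
    *"$want"*) echo "found $want" ;;
    *) echo "missing $want" >&2; exit 1 ;;
  esac
done
```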

                                                
                                    
TestForceSystemdFlag (45.9s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-121246 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-121246 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (42.616412529s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-121246 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-121246" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-121246
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-121246: (2.631007185s)
--- PASS: TestForceSystemdFlag (45.90s)

                                                
                                    
TestForceSystemdEnv (45.46s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-623360 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-623360 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (41.052230591s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-623360 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-623360" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-623360
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-623360: (3.97404244s)
--- PASS: TestForceSystemdEnv (45.46s)

                                                
                                    
TestErrorSpam/setup (32.25s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-893683 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-893683 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-893683 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-893683 --driver=docker  --container-runtime=docker: (32.249434355s)
--- PASS: TestErrorSpam/setup (32.25s)

                                                
                                    
TestErrorSpam/start (0.84s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-893683 --log_dir /tmp/nospam-893683 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-893683 --log_dir /tmp/nospam-893683 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-893683 --log_dir /tmp/nospam-893683 start --dry-run
--- PASS: TestErrorSpam/start (0.84s)

                                                
                                    
TestErrorSpam/status (1.12s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-893683 --log_dir /tmp/nospam-893683 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-893683 --log_dir /tmp/nospam-893683 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-893683 --log_dir /tmp/nospam-893683 status
--- PASS: TestErrorSpam/status (1.12s)

                                                
                                    
TestErrorSpam/pause (1.42s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-893683 --log_dir /tmp/nospam-893683 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-893683 --log_dir /tmp/nospam-893683 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-893683 --log_dir /tmp/nospam-893683 pause
--- PASS: TestErrorSpam/pause (1.42s)

                                                
                                    
TestErrorSpam/unpause (1.51s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-893683 --log_dir /tmp/nospam-893683 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-893683 --log_dir /tmp/nospam-893683 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-893683 --log_dir /tmp/nospam-893683 unpause
--- PASS: TestErrorSpam/unpause (1.51s)

                                                
                                    
TestErrorSpam/stop (2.06s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-893683 --log_dir /tmp/nospam-893683 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-893683 --log_dir /tmp/nospam-893683 stop: (1.859393634s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-893683 --log_dir /tmp/nospam-893683 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-893683 --log_dir /tmp/nospam-893683 stop
--- PASS: TestErrorSpam/stop (2.06s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19522-1737862/.minikube/files/etc/test/nested/copy/1743249/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (45.94s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-300627 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-300627 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (45.93559385s)
--- PASS: TestFunctional/serial/StartWithProxy (45.94s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (35.83s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-300627 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-300627 --alsologtostderr -v=8: (35.829046645s)
functional_test.go:663: soft start took 35.833210146s for "functional-300627" cluster.
--- PASS: TestFunctional/serial/SoftStart (35.83s)

TestFunctional/serial/KubeContext (0.07s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-300627 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.48s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-300627 cache add registry.k8s.io/pause:3.1: (1.216248967s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-300627 cache add registry.k8s.io/pause:3.3: (1.309879584s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.48s)

TestFunctional/serial/CacheCmd/cache/add_local (1.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-300627 /tmp/TestFunctionalserialCacheCmdcacheadd_local674545671/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 cache add minikube-local-cache-test:functional-300627
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 cache delete minikube-local-cache-test:functional-300627
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-300627
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.05s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.65s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-300627 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (306.893917ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.65s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 kubectl -- --context functional-300627 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-300627 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)

TestFunctional/serial/ExtraConfig (41.25s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-300627 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-300627 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.246039778s)
functional_test.go:761: restart took 41.246158709s for "functional-300627" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (41.25s)

TestFunctional/serial/ComponentHealth (0.11s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-300627 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

TestFunctional/serial/LogsCmd (1.26s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-300627 logs: (1.255024189s)
--- PASS: TestFunctional/serial/LogsCmd (1.26s)

TestFunctional/serial/LogsFileCmd (1.27s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 logs --file /tmp/TestFunctionalserialLogsFileCmd806468974/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-300627 logs --file /tmp/TestFunctionalserialLogsFileCmd806468974/001/logs.txt: (1.265548768s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.27s)

TestFunctional/serial/InvalidService (5.13s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-300627 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-300627
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-300627: exit status 115 (595.839445ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32046 |
	|-----------|-------------|-------------|---------------------------|

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-300627 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-300627 delete -f testdata/invalidsvc.yaml: (1.2786657s)
--- PASS: TestFunctional/serial/InvalidService (5.13s)

TestFunctional/parallel/ConfigCmd (0.53s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-300627 config get cpus: exit status 14 (66.639901ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-300627 config get cpus: exit status 14 (86.881671ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.53s)

TestFunctional/parallel/DashboardCmd (13.04s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-300627 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-300627 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1786594: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.04s)

TestFunctional/parallel/DryRun (0.44s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-300627 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-300627 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (187.740684ms)

-- stdout --
	* [functional-300627] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19522
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19522-1737862/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19522-1737862/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile

-- /stdout --
** stderr ** 
	I0827 22:59:19.822339 1786301 out.go:345] Setting OutFile to fd 1 ...
	I0827 22:59:19.822531 1786301 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 22:59:19.822558 1786301 out.go:358] Setting ErrFile to fd 2...
	I0827 22:59:19.822577 1786301 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 22:59:19.822865 1786301 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19522-1737862/.minikube/bin
	I0827 22:59:19.823279 1786301 out.go:352] Setting JSON to false
	I0827 22:59:19.824392 1786301 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":24108,"bootTime":1724775452,"procs":234,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0827 22:59:19.824514 1786301 start.go:139] virtualization:  
	I0827 22:59:19.828728 1786301 out.go:177] * [functional-300627] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0827 22:59:19.831258 1786301 out.go:177]   - MINIKUBE_LOCATION=19522
	I0827 22:59:19.831306 1786301 notify.go:220] Checking for updates...
	I0827 22:59:19.835328 1786301 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0827 22:59:19.836883 1786301 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19522-1737862/kubeconfig
	I0827 22:59:19.838355 1786301 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19522-1737862/.minikube
	I0827 22:59:19.839853 1786301 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0827 22:59:19.841590 1786301 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0827 22:59:19.843530 1786301 config.go:182] Loaded profile config "functional-300627": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0827 22:59:19.844059 1786301 driver.go:392] Setting default libvirt URI to qemu:///system
	I0827 22:59:19.878635 1786301 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0827 22:59:19.878749 1786301 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0827 22:59:19.937290 1786301 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-27 22:59:19.92613843 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarc
h64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0827 22:59:19.937403 1786301 docker.go:307] overlay module found
	I0827 22:59:19.939339 1786301 out.go:177] * Using the docker driver based on existing profile
	I0827 22:59:19.941234 1786301 start.go:297] selected driver: docker
	I0827 22:59:19.941253 1786301 start.go:901] validating driver "docker" against &{Name:functional-300627 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-300627 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 22:59:19.941382 1786301 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0827 22:59:19.944924 1786301 out.go:201] 
	W0827 22:59:19.946894 1786301 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0827 22:59:19.948719 1786301 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-300627 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.44s)

TestFunctional/parallel/InternationalLanguage (0.18s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-300627 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-300627 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (179.447726ms)

-- stdout --
	* [functional-300627] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19522
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19522-1737862/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19522-1737862/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant

-- /stdout --
** stderr ** 
	I0827 22:59:20.254782 1786427 out.go:345] Setting OutFile to fd 1 ...
	I0827 22:59:20.254921 1786427 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 22:59:20.254931 1786427 out.go:358] Setting ErrFile to fd 2...
	I0827 22:59:20.254936 1786427 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 22:59:20.255283 1786427 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19522-1737862/.minikube/bin
	I0827 22:59:20.255675 1786427 out.go:352] Setting JSON to false
	I0827 22:59:20.256765 1786427 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":24109,"bootTime":1724775452,"procs":234,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0827 22:59:20.256844 1786427 start.go:139] virtualization:  
	I0827 22:59:20.259738 1786427 out.go:177] * [functional-300627] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	I0827 22:59:20.262376 1786427 out.go:177]   - MINIKUBE_LOCATION=19522
	I0827 22:59:20.262502 1786427 notify.go:220] Checking for updates...
	I0827 22:59:20.267190 1786427 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0827 22:59:20.269627 1786427 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19522-1737862/kubeconfig
	I0827 22:59:20.272114 1786427 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19522-1737862/.minikube
	I0827 22:59:20.274614 1786427 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0827 22:59:20.277284 1786427 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0827 22:59:20.280454 1786427 config.go:182] Loaded profile config "functional-300627": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0827 22:59:20.281175 1786427 driver.go:392] Setting default libvirt URI to qemu:///system
	I0827 22:59:20.307214 1786427 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0827 22:59:20.307337 1786427 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0827 22:59:20.366624 1786427 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-27 22:59:20.356889394 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0827 22:59:20.366744 1786427 docker.go:307] overlay module found
	I0827 22:59:20.369457 1786427 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0827 22:59:20.371947 1786427 start.go:297] selected driver: docker
	I0827 22:59:20.371966 1786427 start.go:901] validating driver "docker" against &{Name:functional-300627 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-300627 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 22:59:20.372085 1786427 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0827 22:59:20.375260 1786427 out.go:201] 
	W0827 22:59:20.377756 1786427 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0827 22:59:20.380400 1786427 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)

TestFunctional/parallel/StatusCmd (1.29s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.29s)
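
The second command above renders minikube's status through a Go template, producing a single line shaped like `host:Running,kublet:Running,apiserver:Running,kubeconfig:Configured` (`kublet` is spelled that way in the test's own format string; the values here are illustrative). A minimal Python sketch — not minikube code, purely an illustration — of turning that `key:value,key:value` output back into a dict:

```python
def parse_status_line(line: str) -> dict:
    """Split minikube's custom `key:value,key:value` status output.

    Assumes no commas or colons occur inside the values themselves,
    which holds for the Host/Kubelet/APIServer/Kubeconfig fields.
    """
    fields = {}
    for pair in line.strip().split(","):
        key, _, value = pair.partition(":")
        fields[key] = value
    return fields

# Output shaped like the test's format string
# "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}"
status = parse_status_line("host:Running,kublet:Running,apiserver:Running,kubeconfig:Configured")
print(status["apiserver"])  # Running
```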

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (6.76s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-300627 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-300627 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-vbw4w" [dcb03730-d885-408c-9b35-cb36d5e7d2ae] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-vbw4w" [dcb03730-d885-408c-9b35-cb36d5e7d2ae] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 6.004576445s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:32385
functional_test.go:1675: http://192.168.49.2:32385: success! body:

Hostname: hello-node-connect-65d86f57f4-vbw4w

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32385
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (6.76s)
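
The body logged above is the echoserver image's standard reply: hostname, pod info, server values, then the request line and headers as observed inside the cluster. As a purely illustrative sketch (the real container is `registry.k8s.io/echoserver-arm:1.8`, not this code), the sectioned layout can be reproduced like so:

```python
def echo_body(hostname, method, path, version, headers):
    """Render a reply in the same sectioned layout as the echoserver output."""
    lines = [
        f"Hostname: {hostname}",
        "",
        "Request Information:",
        f"\tmethod={method}",
        f"\treal path={path}",
        f"\trequest_version={version}",
        "",
        "Request Headers:",
    ]
    # Echoserver shows header names lowercased, one per line
    for name, value in sorted(headers.items()):
        lines.append(f"\t{name.lower()}={value}")
    return "\n".join(lines)

print(echo_body("hello-node-connect-65d86f57f4-vbw4w", "GET", "/", "1.1",
                {"Host": "192.168.49.2:32385", "User-Agent": "Go-http-client/1.1"}))
```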

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.15s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

TestFunctional/parallel/PersistentVolumeClaim (27.78s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [abaf4404-377d-45b5-a60a-5ad6043e20ff] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003611317s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-300627 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-300627 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-300627 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-300627 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [c0cc3212-fd15-48cf-8d8b-0439ef4c4e69] Pending
helpers_test.go:344: "sp-pod" [c0cc3212-fd15-48cf-8d8b-0439ef4c4e69] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [c0cc3212-fd15-48cf-8d8b-0439ef4c4e69] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.004993291s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-300627 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-300627 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-300627 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [307eac50-cdd2-47d7-8cc9-57a72c70eec0] Pending
helpers_test.go:344: "sp-pod" [307eac50-cdd2-47d7-8cc9-57a72c70eec0] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [307eac50-cdd2-47d7-8cc9-57a72c70eec0] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003859522s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-300627 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.78s)

TestFunctional/parallel/SSHCmd (0.73s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.73s)

TestFunctional/parallel/CpCmd (2s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 ssh -n functional-300627 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 cp functional-300627:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2848684882/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 ssh -n functional-300627 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 ssh -n functional-300627 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.00s)

TestFunctional/parallel/FileSync (0.4s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1743249/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 ssh "sudo cat /etc/test/nested/copy/1743249/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.40s)

TestFunctional/parallel/CertSync (2.14s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1743249.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 ssh "sudo cat /etc/ssl/certs/1743249.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1743249.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 ssh "sudo cat /usr/share/ca-certificates/1743249.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/17432492.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 ssh "sudo cat /etc/ssl/certs/17432492.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/17432492.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 ssh "sudo cat /usr/share/ca-certificates/17432492.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.14s)

TestFunctional/parallel/NodeLabels (0.11s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-300627 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)
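
The go-template above ranges over `.metadata.labels` of the first node and prints the keys separated by spaces; Go templates iterate maps in sorted key order. An equivalent extraction from `kubectl get nodes -o json` output, sketched in Python with invented label data:

```python
import json

def node_label_keys(nodes_json: str) -> list:
    """Mimic the test's template: the label keys of the first node, sorted."""
    items = json.loads(nodes_json)["items"]
    return sorted(items[0]["metadata"]["labels"])

# A trimmed, made-up stand-in for `kubectl get nodes -o json` output
sample = json.dumps({"items": [{"metadata": {"labels": {
    "kubernetes.io/hostname": "functional-300627",
    "kubernetes.io/arch": "arm64",
}}}]})
print(" ".join(node_label_keys(sample)))  # kubernetes.io/arch kubernetes.io/hostname
```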

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.41s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-300627 ssh "sudo systemctl is-active crio": exit status 1 (407.986999ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.41s)

TestFunctional/parallel/License (0.25s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.25s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (1.18s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-300627 version -o=json --components: (1.180478126s)
--- PASS: TestFunctional/parallel/Version/components (1.18s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-300627 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-300627
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kicbase/echo-server:functional-300627
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-300627 image ls --format short --alsologtostderr:
I0827 22:59:27.125595 1787126 out.go:345] Setting OutFile to fd 1 ...
I0827 22:59:27.125717 1787126 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0827 22:59:27.125742 1787126 out.go:358] Setting ErrFile to fd 2...
I0827 22:59:27.125747 1787126 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0827 22:59:27.126020 1787126 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19522-1737862/.minikube/bin
I0827 22:59:27.126643 1787126 config.go:182] Loaded profile config "functional-300627": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0827 22:59:27.126759 1787126 config.go:182] Loaded profile config "functional-300627": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0827 22:59:27.127235 1787126 cli_runner.go:164] Run: docker container inspect functional-300627 --format={{.State.Status}}
I0827 22:59:27.145281 1787126 ssh_runner.go:195] Run: systemctl --version
I0827 22:59:27.145335 1787126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-300627
I0827 22:59:27.162685 1787126 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/19522-1737862/.minikube/machines/functional-300627/id_rsa Username:docker}
I0827 22:59:27.265326 1787126 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-300627 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-scheduler              | v1.31.0           | fbbbd428abb4d | 66MB   |
| registry.k8s.io/etcd                        | 3.5.15-0          | 27e3830e14027 | 139MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| localhost/my-image                          | functional-300627 | 62809d4492836 | 1.41MB |
| registry.k8s.io/kube-proxy                  | v1.31.0           | 71d55d66fd4ee | 94.7MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| docker.io/library/nginx                     | latest            | a9dfdba8b7190 | 193MB  |
| registry.k8s.io/kube-controller-manager     | v1.31.0           | fcb0683e6bdbd | 85.9MB |
| registry.k8s.io/pause                       | 3.10              | afb61768ce381 | 514kB  |
| registry.k8s.io/coredns/coredns             | v1.11.1           | 2437cf7621777 | 57.4MB |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| docker.io/library/minikube-local-cache-test | functional-300627 | 9834b650f2e29 | 30B    |
| docker.io/library/nginx                     | alpine            | 70594c812316a | 47MB   |
| registry.k8s.io/kube-apiserver              | v1.31.0           | cd0f0ae0ec9e0 | 91.5MB |
| docker.io/kicbase/echo-server               | functional-300627 | ce2d2cda2d858 | 4.78MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-300627 image ls --format table --alsologtostderr:
I0827 22:59:31.261687 1787510 out.go:345] Setting OutFile to fd 1 ...
I0827 22:59:31.261918 1787510 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0827 22:59:31.261944 1787510 out.go:358] Setting ErrFile to fd 2...
I0827 22:59:31.261962 1787510 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0827 22:59:31.262240 1787510 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19522-1737862/.minikube/bin
I0827 22:59:31.262938 1787510 config.go:182] Loaded profile config "functional-300627": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0827 22:59:31.263120 1787510 config.go:182] Loaded profile config "functional-300627": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0827 22:59:31.263657 1787510 cli_runner.go:164] Run: docker container inspect functional-300627 --format={{.State.Status}}
I0827 22:59:31.281428 1787510 ssh_runner.go:195] Run: systemctl --version
I0827 22:59:31.281482 1787510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-300627
I0827 22:59:31.300850 1787510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/19522-1737862/.minikube/machines/functional-300627/id_rsa Username:docker}
I0827 22:59:31.401324 1787510 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
2024/08/27 22:59:33 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-300627 image ls --format json --alsologtostderr:
[{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-300627"],"size":"4780000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"9834b650f2e29ba5e445eaafb0a07cc51e325050c00b8c54eefc12f4cc059117","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-300627"],"size":"30"},{"id":"fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"66000000"},{"id":"71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"94700000"},{"id":"27e3830e1402783674d8b5940
38967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139000000"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"},{"id":"2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"57400000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"91500000"},{"id":"70594c812316a9bc20dd5d679982c6322dc7cf0128687ae9f849d0207783e753","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"47000000"},{"id":"a9dfdba8b719078c5705fdecd6f8315765cc79e473111aa9451551ddc340b2bc","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"85900000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"62809d44928362d4f8d4c90d0a36eca9cd8b60b18c12440eb9e106238b7f5b65","repoDigests":[],"repoTags":["localhost/my-image:functional-300627"],"size":"1410000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-300627 image ls --format json --alsologtostderr:
I0827 22:59:30.995270 1787477 out.go:345] Setting OutFile to fd 1 ...
I0827 22:59:30.995523 1787477 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0827 22:59:30.995535 1787477 out.go:358] Setting ErrFile to fd 2...
I0827 22:59:30.995568 1787477 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0827 22:59:30.995947 1787477 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19522-1737862/.minikube/bin
I0827 22:59:31.001755 1787477 config.go:182] Loaded profile config "functional-300627": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0827 22:59:31.001969 1787477 config.go:182] Loaded profile config "functional-300627": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0827 22:59:31.002545 1787477 cli_runner.go:164] Run: docker container inspect functional-300627 --format={{.State.Status}}
I0827 22:59:31.034359 1787477 ssh_runner.go:195] Run: systemctl --version
I0827 22:59:31.034416 1787477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-300627
I0827 22:59:31.061674 1787477 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/19522-1737862/.minikube/machines/functional-300627/id_rsa Username:docker}
I0827 22:59:31.165574 1787477 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-300627 image ls --format yaml --alsologtostderr:
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 70594c812316a9bc20dd5d679982c6322dc7cf0128687ae9f849d0207783e753
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "47000000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 9834b650f2e29ba5e445eaafb0a07cc51e325050c00b8c54eefc12f4cc059117
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-300627
size: "30"
- id: a9dfdba8b719078c5705fdecd6f8315765cc79e473111aa9451551ddc340b2bc
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139000000"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "514000"
- id: 2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "57400000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "85900000"
- id: fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "66000000"
- id: 71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "94700000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "91500000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-300627
size: "4780000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-300627 image ls --format yaml --alsologtostderr:
I0827 22:59:27.352723 1787158 out.go:345] Setting OutFile to fd 1 ...
I0827 22:59:27.352854 1787158 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0827 22:59:27.352858 1787158 out.go:358] Setting ErrFile to fd 2...
I0827 22:59:27.352863 1787158 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0827 22:59:27.353094 1787158 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19522-1737862/.minikube/bin
I0827 22:59:27.353785 1787158 config.go:182] Loaded profile config "functional-300627": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0827 22:59:27.353904 1787158 config.go:182] Loaded profile config "functional-300627": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0827 22:59:27.354364 1787158 cli_runner.go:164] Run: docker container inspect functional-300627 --format={{.State.Status}}
I0827 22:59:27.378225 1787158 ssh_runner.go:195] Run: systemctl --version
I0827 22:59:27.378279 1787158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-300627
I0827 22:59:27.398737 1787158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/19522-1737862/.minikube/machines/functional-300627/id_rsa Username:docker}
I0827 22:59:27.497168 1787158 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-300627 ssh pgrep buildkitd: exit status 1 (306.85848ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 image build -t localhost/my-image:functional-300627 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-300627 image build -t localhost/my-image:functional-300627 testdata/build --alsologtostderr: (2.843519732s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-300627 image build -t localhost/my-image:functional-300627 testdata/build --alsologtostderr:
I0827 22:59:27.920594 1787248 out.go:345] Setting OutFile to fd 1 ...
I0827 22:59:27.922108 1787248 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0827 22:59:27.922125 1787248 out.go:358] Setting ErrFile to fd 2...
I0827 22:59:27.922131 1787248 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0827 22:59:27.922409 1787248 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19522-1737862/.minikube/bin
I0827 22:59:27.923074 1787248 config.go:182] Loaded profile config "functional-300627": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0827 22:59:27.926404 1787248 config.go:182] Loaded profile config "functional-300627": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0827 22:59:27.926978 1787248 cli_runner.go:164] Run: docker container inspect functional-300627 --format={{.State.Status}}
I0827 22:59:27.949287 1787248 ssh_runner.go:195] Run: systemctl --version
I0827 22:59:27.949343 1787248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-300627
I0827 22:59:27.975656 1787248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/19522-1737862/.minikube/machines/functional-300627/id_rsa Username:docker}
I0827 22:59:28.085646 1787248 build_images.go:161] Building image from path: /tmp/build.4255754968.tar
I0827 22:59:28.085725 1787248 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0827 22:59:28.098400 1787248 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4255754968.tar
I0827 22:59:28.103297 1787248 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4255754968.tar: stat -c "%s %y" /var/lib/minikube/build/build.4255754968.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.4255754968.tar': No such file or directory
I0827 22:59:28.103365 1787248 ssh_runner.go:362] scp /tmp/build.4255754968.tar --> /var/lib/minikube/build/build.4255754968.tar (3072 bytes)
I0827 22:59:28.133879 1787248 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4255754968
I0827 22:59:28.147903 1787248 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4255754968 -xf /var/lib/minikube/build/build.4255754968.tar
I0827 22:59:28.171838 1787248 docker.go:360] Building image: /var/lib/minikube/build/build.4255754968
I0827 22:59:28.171907 1787248 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-300627 /var/lib/minikube/build/build.4255754968
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.2s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.6s

#6 [2/3] RUN true
#6 DONE 0.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:62809d44928362d4f8d4c90d0a36eca9cd8b60b18c12440eb9e106238b7f5b65 done
#8 naming to localhost/my-image:functional-300627 done
#8 DONE 0.0s
I0827 22:59:30.646436 1787248 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-300627 /var/lib/minikube/build/build.4255754968: (2.4745007s)
I0827 22:59:30.646531 1787248 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4255754968
I0827 22:59:30.659757 1787248 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4255754968.tar
I0827 22:59:30.674203 1787248 build_images.go:217] Built localhost/my-image:functional-300627 from /tmp/build.4255754968.tar
I0827 22:59:30.674236 1787248 build_images.go:133] succeeded building to: functional-300627
I0827 22:59:30.674242 1787248 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.39s)

TestFunctional/parallel/ImageCommands/Setup (0.87s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-300627
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.87s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 image load --daemon kicbase/echo-server:functional-300627 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.24s)

TestFunctional/parallel/DockerEnv/bash (1.33s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-300627 docker-env) && out/minikube-linux-arm64 status -p functional-300627"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-300627 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.33s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.93s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 image load --daemon kicbase/echo-server:functional-300627 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.93s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-300627
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 image load --daemon kicbase/echo-server:functional-300627 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.30s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 image save kicbase/echo-server:functional-300627 /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.45s)

TestFunctional/parallel/ServiceCmd/DeployApp (10.41s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-300627 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-300627 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-j2s4q" [64f159cc-238c-46bf-8144-0c35a4e2d800] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-j2s4q" [64f159cc-238c-46bf-8144-0c35a4e2d800] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.003998359s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.41s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.6s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 image rm kicbase/echo-server:functional-300627 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.60s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.74s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.74s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-300627
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 image save --daemon kicbase/echo-server:functional-300627 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-300627
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.54s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.69s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-300627 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-300627 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-300627 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1782753: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-300627 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.69s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-300627 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.46s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-300627 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [7cf5b925-1751-4ea1-8824-4d19f63a6dfc] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [7cf5b925-1751-4ea1-8824-4d19f63a6dfc] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.002983202s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.46s)

TestFunctional/parallel/ServiceCmd/List (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.34s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 service list -o json
functional_test.go:1494: Took "359.223832ms" to run "out/minikube-linux-arm64 -p functional-300627 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.36s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:32445
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

TestFunctional/parallel/ServiceCmd/Format (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.37s)

TestFunctional/parallel/ServiceCmd/URL (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:32445
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.38s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-300627 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.105.121.226 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-300627 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.55s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.55s)

TestFunctional/parallel/ProfileCmd/profile_list (0.53s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "454.889288ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "75.385296ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.53s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.5s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "429.299692ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "71.745149ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.50s)

TestFunctional/parallel/MountCmd/any-port (7.3s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-300627 /tmp/TestFunctionalparallelMountCmdany-port3618525874/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1724799548850082313" to /tmp/TestFunctionalparallelMountCmdany-port3618525874/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1724799548850082313" to /tmp/TestFunctionalparallelMountCmdany-port3618525874/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1724799548850082313" to /tmp/TestFunctionalparallelMountCmdany-port3618525874/001/test-1724799548850082313
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-300627 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (570.221595ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 27 22:59 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 27 22:59 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 27 22:59 test-1724799548850082313
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 ssh cat /mount-9p/test-1724799548850082313
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-300627 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [cb5566e7-5402-4189-b40b-d098233782ea] Pending
helpers_test.go:344: "busybox-mount" [cb5566e7-5402-4189-b40b-d098233782ea] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [cb5566e7-5402-4189-b40b-d098233782ea] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [cb5566e7-5402-4189-b40b-d098233782ea] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.021644292s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-300627 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-300627 /tmp/TestFunctionalparallelMountCmdany-port3618525874/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.30s)

TestFunctional/parallel/MountCmd/specific-port (2.31s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-300627 /tmp/TestFunctionalparallelMountCmdspecific-port1309702264/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-300627 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (467.063505ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-300627 /tmp/TestFunctionalparallelMountCmdspecific-port1309702264/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-300627 ssh "sudo umount -f /mount-9p": exit status 1 (270.810879ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr **
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-300627 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-300627 /tmp/TestFunctionalparallelMountCmdspecific-port1309702264/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.31s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.31s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-300627 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2207634619/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-300627 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2207634619/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-300627 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2207634619/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-300627 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-300627 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-300627 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2207634619/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-300627 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2207634619/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-300627 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2207634619/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.31s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-300627
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-300627
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-300627
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (130.66s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-640807 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0827 22:59:52.231381 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/addons-958846/client.crt: no such file or directory" logger="UnhandledError"
E0827 22:59:52.238781 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/addons-958846/client.crt: no such file or directory" logger="UnhandledError"
E0827 22:59:52.250159 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/addons-958846/client.crt: no such file or directory" logger="UnhandledError"
E0827 22:59:52.271818 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/addons-958846/client.crt: no such file or directory" logger="UnhandledError"
E0827 22:59:52.313193 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/addons-958846/client.crt: no such file or directory" logger="UnhandledError"
E0827 22:59:52.394866 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/addons-958846/client.crt: no such file or directory" logger="UnhandledError"
E0827 22:59:52.556376 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/addons-958846/client.crt: no such file or directory" logger="UnhandledError"
E0827 22:59:52.878143 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/addons-958846/client.crt: no such file or directory" logger="UnhandledError"
E0827 22:59:53.519462 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/addons-958846/client.crt: no such file or directory" logger="UnhandledError"
E0827 22:59:54.801556 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/addons-958846/client.crt: no such file or directory" logger="UnhandledError"
E0827 22:59:57.363758 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/addons-958846/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:00:02.485666 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/addons-958846/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:00:12.727438 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/addons-958846/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:00:33.209243 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/addons-958846/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:01:14.170881 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/addons-958846/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-640807 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (2m9.790698189s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-640807 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (130.66s)

TestMultiControlPlane/serial/DeployApp (43.6s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-640807 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-640807 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-640807 -- rollout status deployment/busybox: (4.64445476s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-640807 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-640807 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-640807 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-640807 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-640807 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-640807 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-640807 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-640807 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-640807 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-640807 -- exec busybox-7dff88458-4swk5 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-640807 -- exec busybox-7dff88458-94pds -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-640807 -- exec busybox-7dff88458-s8mw5 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-640807 -- exec busybox-7dff88458-4swk5 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-640807 -- exec busybox-7dff88458-94pds -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-640807 -- exec busybox-7dff88458-s8mw5 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-640807 -- exec busybox-7dff88458-4swk5 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-640807 -- exec busybox-7dff88458-94pds -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-640807 -- exec busybox-7dff88458-s8mw5 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (43.60s)

TestMultiControlPlane/serial/PingHostFromPods (1.75s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-640807 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-640807 -- exec busybox-7dff88458-4swk5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-640807 -- exec busybox-7dff88458-4swk5 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-640807 -- exec busybox-7dff88458-94pds -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-640807 -- exec busybox-7dff88458-94pds -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-640807 -- exec busybox-7dff88458-s8mw5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-640807 -- exec busybox-7dff88458-s8mw5 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.75s)

TestMultiControlPlane/serial/AddWorkerNode (28.44s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-640807 -v=7 --alsologtostderr
E0827 23:02:36.092658 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/addons-958846/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-640807 -v=7 --alsologtostderr: (27.233050175s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-640807 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-640807 status -v=7 --alsologtostderr: (1.208660083s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (28.44s)

TestMultiControlPlane/serial/NodeLabels (0.11s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-640807 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.8s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.80s)

TestMultiControlPlane/serial/CopyFile (19.62s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-640807 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-linux-arm64 -p ha-640807 status --output json -v=7 --alsologtostderr: (1.04766036s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-640807 cp testdata/cp-test.txt ha-640807:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-640807 ssh -n ha-640807 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-640807 cp ha-640807:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1230753198/001/cp-test_ha-640807.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-640807 ssh -n ha-640807 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-640807 cp ha-640807:/home/docker/cp-test.txt ha-640807-m02:/home/docker/cp-test_ha-640807_ha-640807-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-640807 ssh -n ha-640807 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-640807 ssh -n ha-640807-m02 "sudo cat /home/docker/cp-test_ha-640807_ha-640807-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-640807 cp ha-640807:/home/docker/cp-test.txt ha-640807-m03:/home/docker/cp-test_ha-640807_ha-640807-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-640807 ssh -n ha-640807 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-640807 ssh -n ha-640807-m03 "sudo cat /home/docker/cp-test_ha-640807_ha-640807-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-640807 cp ha-640807:/home/docker/cp-test.txt ha-640807-m04:/home/docker/cp-test_ha-640807_ha-640807-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-640807 ssh -n ha-640807 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-640807 ssh -n ha-640807-m04 "sudo cat /home/docker/cp-test_ha-640807_ha-640807-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-640807 cp testdata/cp-test.txt ha-640807-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-640807 ssh -n ha-640807-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-640807 cp ha-640807-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1230753198/001/cp-test_ha-640807-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-640807 ssh -n ha-640807-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-640807 cp ha-640807-m02:/home/docker/cp-test.txt ha-640807:/home/docker/cp-test_ha-640807-m02_ha-640807.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-640807 ssh -n ha-640807-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-640807 ssh -n ha-640807 "sudo cat /home/docker/cp-test_ha-640807-m02_ha-640807.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-640807 cp ha-640807-m02:/home/docker/cp-test.txt ha-640807-m03:/home/docker/cp-test_ha-640807-m02_ha-640807-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-640807 ssh -n ha-640807-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-640807 ssh -n ha-640807-m03 "sudo cat /home/docker/cp-test_ha-640807-m02_ha-640807-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-640807 cp ha-640807-m02:/home/docker/cp-test.txt ha-640807-m04:/home/docker/cp-test_ha-640807-m02_ha-640807-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-640807 ssh -n ha-640807-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-640807 ssh -n ha-640807-m04 "sudo cat /home/docker/cp-test_ha-640807-m02_ha-640807-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-640807 cp testdata/cp-test.txt ha-640807-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-640807 ssh -n ha-640807-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-640807 cp ha-640807-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1230753198/001/cp-test_ha-640807-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-640807 ssh -n ha-640807-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-640807 cp ha-640807-m03:/home/docker/cp-test.txt ha-640807:/home/docker/cp-test_ha-640807-m03_ha-640807.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-640807 ssh -n ha-640807-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-640807 ssh -n ha-640807 "sudo cat /home/docker/cp-test_ha-640807-m03_ha-640807.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-640807 cp ha-640807-m03:/home/docker/cp-test.txt ha-640807-m02:/home/docker/cp-test_ha-640807-m03_ha-640807-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-640807 ssh -n ha-640807-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-640807 ssh -n ha-640807-m02 "sudo cat /home/docker/cp-test_ha-640807-m03_ha-640807-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-640807 cp ha-640807-m03:/home/docker/cp-test.txt ha-640807-m04:/home/docker/cp-test_ha-640807-m03_ha-640807-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-640807 ssh -n ha-640807-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-640807 ssh -n ha-640807-m04 "sudo cat /home/docker/cp-test_ha-640807-m03_ha-640807-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-640807 cp testdata/cp-test.txt ha-640807-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-640807 ssh -n ha-640807-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-640807 cp ha-640807-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1230753198/001/cp-test_ha-640807-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-640807 ssh -n ha-640807-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-640807 cp ha-640807-m04:/home/docker/cp-test.txt ha-640807:/home/docker/cp-test_ha-640807-m04_ha-640807.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-640807 ssh -n ha-640807-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-640807 ssh -n ha-640807 "sudo cat /home/docker/cp-test_ha-640807-m04_ha-640807.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-640807 cp ha-640807-m04:/home/docker/cp-test.txt ha-640807-m02:/home/docker/cp-test_ha-640807-m04_ha-640807-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-640807 ssh -n ha-640807-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-640807 ssh -n ha-640807-m02 "sudo cat /home/docker/cp-test_ha-640807-m04_ha-640807-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-640807 cp ha-640807-m04:/home/docker/cp-test.txt ha-640807-m03:/home/docker/cp-test_ha-640807-m04_ha-640807-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-640807 ssh -n ha-640807-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-640807 ssh -n ha-640807-m03 "sudo cat /home/docker/cp-test_ha-640807-m04_ha-640807-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.62s)

TestMultiControlPlane/serial/StopSecondaryNode (11.81s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-640807 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-640807 node stop m02 -v=7 --alsologtostderr: (11.04764104s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-640807 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-640807 status -v=7 --alsologtostderr: exit status 7 (760.654571ms)

-- stdout --
	ha-640807
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-640807-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-640807-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-640807-m04
	type: Worker
	host: Running
	kubelet: Running

-- /stdout --
** stderr ** 
	I0827 23:03:32.234737 1810076 out.go:345] Setting OutFile to fd 1 ...
	I0827 23:03:32.234885 1810076 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 23:03:32.234896 1810076 out.go:358] Setting ErrFile to fd 2...
	I0827 23:03:32.234902 1810076 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 23:03:32.235167 1810076 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19522-1737862/.minikube/bin
	I0827 23:03:32.235398 1810076 out.go:352] Setting JSON to false
	I0827 23:03:32.235443 1810076 mustload.go:65] Loading cluster: ha-640807
	I0827 23:03:32.235556 1810076 notify.go:220] Checking for updates...
	I0827 23:03:32.235892 1810076 config.go:182] Loaded profile config "ha-640807": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0827 23:03:32.235909 1810076 status.go:255] checking status of ha-640807 ...
	I0827 23:03:32.236415 1810076 cli_runner.go:164] Run: docker container inspect ha-640807 --format={{.State.Status}}
	I0827 23:03:32.255052 1810076 status.go:330] ha-640807 host status = "Running" (err=<nil>)
	I0827 23:03:32.255078 1810076 host.go:66] Checking if "ha-640807" exists ...
	I0827 23:03:32.255400 1810076 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-640807
	I0827 23:03:32.291745 1810076 host.go:66] Checking if "ha-640807" exists ...
	I0827 23:03:32.292125 1810076 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0827 23:03:32.292233 1810076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-640807
	I0827 23:03:32.310265 1810076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/19522-1737862/.minikube/machines/ha-640807/id_rsa Username:docker}
	I0827 23:03:32.409586 1810076 ssh_runner.go:195] Run: systemctl --version
	I0827 23:03:32.413901 1810076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0827 23:03:32.425908 1810076 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0827 23:03:32.493000 1810076 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:71 SystemTime:2024-08-27 23:03:32.474958115 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0827 23:03:32.493794 1810076 kubeconfig.go:125] found "ha-640807" server: "https://192.168.49.254:8443"
	I0827 23:03:32.493832 1810076 api_server.go:166] Checking apiserver status ...
	I0827 23:03:32.493880 1810076 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0827 23:03:32.506441 1810076 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2335/cgroup
	I0827 23:03:32.516799 1810076 api_server.go:182] apiserver freezer: "9:freezer:/docker/1ffb1679e91a5a06c4bac0cc3e4b079d716252e44086d722eb5ca57ca047e294/kubepods/burstable/pod198e1ad7596b902e81521a8f763a5875/68b3e05977aa7ff7f1010239d9692b1df5a43d42f7620e57531be7a1c4811ab7"
	I0827 23:03:32.516883 1810076 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/1ffb1679e91a5a06c4bac0cc3e4b079d716252e44086d722eb5ca57ca047e294/kubepods/burstable/pod198e1ad7596b902e81521a8f763a5875/68b3e05977aa7ff7f1010239d9692b1df5a43d42f7620e57531be7a1c4811ab7/freezer.state
	I0827 23:03:32.526287 1810076 api_server.go:204] freezer state: "THAWED"
	I0827 23:03:32.526315 1810076 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0827 23:03:32.534167 1810076 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0827 23:03:32.534200 1810076 status.go:422] ha-640807 apiserver status = Running (err=<nil>)
	I0827 23:03:32.534212 1810076 status.go:257] ha-640807 status: &{Name:ha-640807 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0827 23:03:32.534229 1810076 status.go:255] checking status of ha-640807-m02 ...
	I0827 23:03:32.534642 1810076 cli_runner.go:164] Run: docker container inspect ha-640807-m02 --format={{.State.Status}}
	I0827 23:03:32.558482 1810076 status.go:330] ha-640807-m02 host status = "Stopped" (err=<nil>)
	I0827 23:03:32.558508 1810076 status.go:343] host is not running, skipping remaining checks
	I0827 23:03:32.558533 1810076 status.go:257] ha-640807-m02 status: &{Name:ha-640807-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0827 23:03:32.558553 1810076 status.go:255] checking status of ha-640807-m03 ...
	I0827 23:03:32.558859 1810076 cli_runner.go:164] Run: docker container inspect ha-640807-m03 --format={{.State.Status}}
	I0827 23:03:32.576835 1810076 status.go:330] ha-640807-m03 host status = "Running" (err=<nil>)
	I0827 23:03:32.576863 1810076 host.go:66] Checking if "ha-640807-m03" exists ...
	I0827 23:03:32.577181 1810076 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-640807-m03
	I0827 23:03:32.595101 1810076 host.go:66] Checking if "ha-640807-m03" exists ...
	I0827 23:03:32.595419 1810076 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0827 23:03:32.595472 1810076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-640807-m03
	I0827 23:03:32.612240 1810076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/19522-1737862/.minikube/machines/ha-640807-m03/id_rsa Username:docker}
	I0827 23:03:32.709853 1810076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0827 23:03:32.722933 1810076 kubeconfig.go:125] found "ha-640807" server: "https://192.168.49.254:8443"
	I0827 23:03:32.722966 1810076 api_server.go:166] Checking apiserver status ...
	I0827 23:03:32.723017 1810076 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0827 23:03:32.736422 1810076 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2167/cgroup
	I0827 23:03:32.747693 1810076 api_server.go:182] apiserver freezer: "9:freezer:/docker/47fc9c04146efc0244716076a288912a5dc87b0c68db732b65e75f542412be13/kubepods/burstable/pod678007791fb0af25c584c7bb3f073137/93e822e3bc69d2e84581f930542346afeb4383dd4b610b78a5f2e1336cae4180"
	I0827 23:03:32.747820 1810076 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/47fc9c04146efc0244716076a288912a5dc87b0c68db732b65e75f542412be13/kubepods/burstable/pod678007791fb0af25c584c7bb3f073137/93e822e3bc69d2e84581f930542346afeb4383dd4b610b78a5f2e1336cae4180/freezer.state
	I0827 23:03:32.758354 1810076 api_server.go:204] freezer state: "THAWED"
	I0827 23:03:32.758424 1810076 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0827 23:03:32.766481 1810076 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0827 23:03:32.766511 1810076 status.go:422] ha-640807-m03 apiserver status = Running (err=<nil>)
	I0827 23:03:32.766520 1810076 status.go:257] ha-640807-m03 status: &{Name:ha-640807-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0827 23:03:32.766554 1810076 status.go:255] checking status of ha-640807-m04 ...
	I0827 23:03:32.766897 1810076 cli_runner.go:164] Run: docker container inspect ha-640807-m04 --format={{.State.Status}}
	I0827 23:03:32.788479 1810076 status.go:330] ha-640807-m04 host status = "Running" (err=<nil>)
	I0827 23:03:32.788504 1810076 host.go:66] Checking if "ha-640807-m04" exists ...
	I0827 23:03:32.788828 1810076 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-640807-m04
	I0827 23:03:32.806307 1810076 host.go:66] Checking if "ha-640807-m04" exists ...
	I0827 23:03:32.806621 1810076 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0827 23:03:32.806666 1810076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-640807-m04
	I0827 23:03:32.825172 1810076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19522-1737862/.minikube/machines/ha-640807-m04/id_rsa Username:docker}
	I0827 23:03:32.925829 1810076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0827 23:03:32.937827 1810076 status.go:257] ha-640807-m04 status: &{Name:ha-640807-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.81s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.60s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.60s)

TestMultiControlPlane/serial/RestartSecondaryNode (77.28s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-640807 node start m02 -v=7 --alsologtostderr
E0827 23:03:45.530575 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/functional-300627/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:03:45.537085 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/functional-300627/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:03:45.548431 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/functional-300627/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:03:45.569904 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/functional-300627/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:03:45.611275 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/functional-300627/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:03:45.692703 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/functional-300627/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:03:45.854258 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/functional-300627/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:03:46.175793 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/functional-300627/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:03:46.817717 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/functional-300627/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:03:48.099168 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/functional-300627/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:03:50.660991 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/functional-300627/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:03:55.783061 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/functional-300627/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:04:06.025248 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/functional-300627/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:04:26.507086 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/functional-300627/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-640807 node start m02 -v=7 --alsologtostderr: (1m16.082847961s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-640807 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-640807 status -v=7 --alsologtostderr: (1.082961335s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (77.28s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.79s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.79s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (265.91s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-640807 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-640807 -v=7 --alsologtostderr
E0827 23:04:52.233554 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/addons-958846/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:05:07.469364 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/functional-300627/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:05:19.934189 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/addons-958846/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-640807 -v=7 --alsologtostderr: (33.889171008s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-640807 --wait=true -v=7 --alsologtostderr
E0827 23:06:29.390683 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/functional-300627/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:08:45.530224 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/functional-300627/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:09:13.232012 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/functional-300627/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-640807 --wait=true -v=7 --alsologtostderr: (3m51.864025651s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-640807
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (265.91s)

TestMultiControlPlane/serial/DeleteSecondaryNode (11.35s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-640807 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-640807 node delete m03 -v=7 --alsologtostderr: (10.383790988s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-640807 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.35s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.59s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.59s)

TestMultiControlPlane/serial/StopCluster (33.16s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-640807 stop -v=7 --alsologtostderr
E0827 23:09:52.231062 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/addons-958846/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-640807 stop -v=7 --alsologtostderr: (33.047711694s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-640807 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-640807 status -v=7 --alsologtostderr: exit status 7 (107.950934ms)

-- stdout --
	ha-640807
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-640807-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-640807-m04
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I0827 23:10:02.548171 1838578 out.go:345] Setting OutFile to fd 1 ...
	I0827 23:10:02.548327 1838578 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 23:10:02.548353 1838578 out.go:358] Setting ErrFile to fd 2...
	I0827 23:10:02.548370 1838578 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 23:10:02.548715 1838578 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19522-1737862/.minikube/bin
	I0827 23:10:02.548919 1838578 out.go:352] Setting JSON to false
	I0827 23:10:02.548963 1838578 mustload.go:65] Loading cluster: ha-640807
	I0827 23:10:02.549050 1838578 notify.go:220] Checking for updates...
	I0827 23:10:02.549404 1838578 config.go:182] Loaded profile config "ha-640807": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0827 23:10:02.549419 1838578 status.go:255] checking status of ha-640807 ...
	I0827 23:10:02.549888 1838578 cli_runner.go:164] Run: docker container inspect ha-640807 --format={{.State.Status}}
	I0827 23:10:02.568928 1838578 status.go:330] ha-640807 host status = "Stopped" (err=<nil>)
	I0827 23:10:02.568951 1838578 status.go:343] host is not running, skipping remaining checks
	I0827 23:10:02.568959 1838578 status.go:257] ha-640807 status: &{Name:ha-640807 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0827 23:10:02.568983 1838578 status.go:255] checking status of ha-640807-m02 ...
	I0827 23:10:02.569370 1838578 cli_runner.go:164] Run: docker container inspect ha-640807-m02 --format={{.State.Status}}
	I0827 23:10:02.593740 1838578 status.go:330] ha-640807-m02 host status = "Stopped" (err=<nil>)
	I0827 23:10:02.593766 1838578 status.go:343] host is not running, skipping remaining checks
	I0827 23:10:02.593774 1838578 status.go:257] ha-640807-m02 status: &{Name:ha-640807-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0827 23:10:02.593805 1838578 status.go:255] checking status of ha-640807-m04 ...
	I0827 23:10:02.594131 1838578 cli_runner.go:164] Run: docker container inspect ha-640807-m04 --format={{.State.Status}}
	I0827 23:10:02.609678 1838578 status.go:330] ha-640807-m04 host status = "Stopped" (err=<nil>)
	I0827 23:10:02.609701 1838578 status.go:343] host is not running, skipping remaining checks
	I0827 23:10:02.609709 1838578 status.go:257] ha-640807-m04 status: &{Name:ha-640807-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (33.16s)

TestMultiControlPlane/serial/RestartCluster (147.58s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-640807 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-640807 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (2m26.524840415s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-640807 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (147.58s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.59s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.59s)

TestMultiControlPlane/serial/AddSecondaryNode (47.67s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-640807 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-640807 --control-plane -v=7 --alsologtostderr: (46.602044743s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-640807 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-640807 status -v=7 --alsologtostderr: (1.070759528s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (47.67s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.82s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.82s)

TestImageBuild/serial/Setup (31.09s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -p image-696284 --driver=docker  --container-runtime=docker
E0827 23:13:45.530432 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/functional-300627/client.crt: no such file or directory" logger="UnhandledError"
image_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -p image-696284 --driver=docker  --container-runtime=docker: (31.087707531s)
--- PASS: TestImageBuild/serial/Setup (31.09s)

TestImageBuild/serial/NormalBuild (2.00s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-696284
image_test.go:78: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-696284: (1.996019527s)
--- PASS: TestImageBuild/serial/NormalBuild (2.00s)

TestImageBuild/serial/BuildWithBuildArg (0.98s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-696284
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.98s)

TestImageBuild/serial/BuildWithDockerIgnore (0.81s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-696284
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.81s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.92s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-696284
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.92s)

TestJSONOutput/start/Command (41.18s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-752489 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-752489 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (41.178753419s)
--- PASS: TestJSONOutput/start/Command (41.18s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-752489 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.60s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-752489 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.51s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-752489 --output=json --user=testUser
E0827 23:14:52.231059 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/addons-958846/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-752489 --output=json --user=testUser: (5.755247695s)
--- PASS: TestJSONOutput/stop/Command (5.76s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-602164 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-602164 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (79.994781ms)

-- stdout --
	{"specversion":"1.0","id":"fcf86e72-b45b-47e4-98cc-a64590527c78","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-602164] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8020c436-e20e-47d6-b5ee-dcc5ce522dd5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19522"}}
	{"specversion":"1.0","id":"b36f3d41-167f-4979-8c80-cd29bb5d6441","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7de97676-2770-441e-8268-2ce4ef2e5640","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19522-1737862/kubeconfig"}}
	{"specversion":"1.0","id":"31fcd86a-a3cf-4a83-95f5-e5abe613177e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19522-1737862/.minikube"}}
	{"specversion":"1.0","id":"9b6162ae-b755-4feb-80a7-299abeec2daa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"7b4f35a7-0094-4b47-bafb-28e8bd8ad2f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"119e37e0-d462-4746-9aa9-59e887934ccf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-602164" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-602164
--- PASS: TestErrorJSONOutput (0.23s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-246304 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-246304 --network=: (33.524794336s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-246304" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-246304
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-246304: (2.092909409s)
--- PASS: TestKicCustomNetwork/create_custom_network (35.64s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-973036 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-973036 --network=bridge: (32.368071491s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-973036" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-973036
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-973036: (1.621062397s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (34.02s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-263562 --network=existing-network
E0827 23:16:15.296673 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/addons-958846/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-263562 --network=existing-network: (32.162482138s)
helpers_test.go:175: Cleaning up "existing-network-263562" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-263562
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-263562: (1.965968429s)
--- PASS: TestKicExistingNetwork (34.29s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-486476 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-486476 --subnet=192.168.60.0/24: (31.83455107s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-486476 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-486476" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-486476
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-486476: (2.089449295s)
--- PASS: TestKicCustomSubnet (33.94s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-605035 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-605035 --static-ip=192.168.200.200: (33.096605074s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-605035 ip
helpers_test.go:175: Cleaning up "static-ip-605035" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-605035
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-605035: (2.115101572s)
--- PASS: TestKicStaticIP (35.37s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-562253 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-562253 --driver=docker  --container-runtime=docker: (33.1392196s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-565100 --driver=docker  --container-runtime=docker
E0827 23:18:45.530525 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/functional-300627/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-565100 --driver=docker  --container-runtime=docker: (33.772475213s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-562253
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-565100
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-565100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-565100
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-565100: (2.14841838s)
helpers_test.go:175: Cleaning up "first-562253" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-562253
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-562253: (2.173743892s)
--- PASS: TestMinikubeProfile (72.53s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-333636 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-333636 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (6.829873926s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.83s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-333636 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.29s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-347363 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-347363 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (7.393293082s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.39s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-347363 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-333636 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-333636 --alsologtostderr -v=5: (1.478949499s)
--- PASS: TestMountStart/serial/DeleteFirst (1.48s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-347363 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-347363
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-347363: (1.223531891s)
--- PASS: TestMountStart/serial/Stop (1.22s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-347363
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-347363: (7.478167929s)
--- PASS: TestMountStart/serial/RestartStopped (8.48s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-347363 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-455802 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0827 23:19:52.230647 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/addons-958846/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:20:08.593662 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/functional-300627/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-455802 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m24.516937102s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-455802 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (85.14s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-455802 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-455802 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-455802 -- rollout status deployment/busybox: (3.540019059s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-455802 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-455802 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-455802 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-455802 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-455802 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-455802 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-455802 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-455802 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-455802 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-455802 -- exec busybox-7dff88458-bp94p -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-455802 -- exec busybox-7dff88458-l4hck -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-455802 -- exec busybox-7dff88458-bp94p -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-455802 -- exec busybox-7dff88458-l4hck -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-455802 -- exec busybox-7dff88458-bp94p -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-455802 -- exec busybox-7dff88458-l4hck -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (41.49s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-455802 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-455802 -- exec busybox-7dff88458-bp94p -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-455802 -- exec busybox-7dff88458-bp94p -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-455802 -- exec busybox-7dff88458-l4hck -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-455802 -- exec busybox-7dff88458-l4hck -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.02s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-455802 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-455802 -v 3 --alsologtostderr: (16.025257554s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-455802 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (16.84s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-455802 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.37s)

TestMultiNode/serial/CopyFile (10.71s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-455802 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-455802 cp testdata/cp-test.txt multinode-455802:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-455802 ssh -n multinode-455802 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-455802 cp multinode-455802:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2147593112/001/cp-test_multinode-455802.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-455802 ssh -n multinode-455802 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-455802 cp multinode-455802:/home/docker/cp-test.txt multinode-455802-m02:/home/docker/cp-test_multinode-455802_multinode-455802-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-455802 ssh -n multinode-455802 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-455802 ssh -n multinode-455802-m02 "sudo cat /home/docker/cp-test_multinode-455802_multinode-455802-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-455802 cp multinode-455802:/home/docker/cp-test.txt multinode-455802-m03:/home/docker/cp-test_multinode-455802_multinode-455802-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-455802 ssh -n multinode-455802 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-455802 ssh -n multinode-455802-m03 "sudo cat /home/docker/cp-test_multinode-455802_multinode-455802-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-455802 cp testdata/cp-test.txt multinode-455802-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-455802 ssh -n multinode-455802-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-455802 cp multinode-455802-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2147593112/001/cp-test_multinode-455802-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-455802 ssh -n multinode-455802-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-455802 cp multinode-455802-m02:/home/docker/cp-test.txt multinode-455802:/home/docker/cp-test_multinode-455802-m02_multinode-455802.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-455802 ssh -n multinode-455802-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-455802 ssh -n multinode-455802 "sudo cat /home/docker/cp-test_multinode-455802-m02_multinode-455802.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-455802 cp multinode-455802-m02:/home/docker/cp-test.txt multinode-455802-m03:/home/docker/cp-test_multinode-455802-m02_multinode-455802-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-455802 ssh -n multinode-455802-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-455802 ssh -n multinode-455802-m03 "sudo cat /home/docker/cp-test_multinode-455802-m02_multinode-455802-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-455802 cp testdata/cp-test.txt multinode-455802-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-455802 ssh -n multinode-455802-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-455802 cp multinode-455802-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2147593112/001/cp-test_multinode-455802-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-455802 ssh -n multinode-455802-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-455802 cp multinode-455802-m03:/home/docker/cp-test.txt multinode-455802:/home/docker/cp-test_multinode-455802-m03_multinode-455802.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-455802 ssh -n multinode-455802-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-455802 ssh -n multinode-455802 "sudo cat /home/docker/cp-test_multinode-455802-m03_multinode-455802.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-455802 cp multinode-455802-m03:/home/docker/cp-test.txt multinode-455802-m02:/home/docker/cp-test_multinode-455802-m03_multinode-455802-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-455802 ssh -n multinode-455802-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-455802 ssh -n multinode-455802-m02 "sudo cat /home/docker/cp-test_multinode-455802-m03_multinode-455802-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.71s)

TestMultiNode/serial/StopNode (2.27s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-455802 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-455802 node stop m03: (1.216109545s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-455802 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-455802 status: exit status 7 (525.916802ms)
-- stdout --
	multinode-455802
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-455802-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-455802-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-455802 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-455802 status --alsologtostderr: exit status 7 (532.294477ms)
-- stdout --
	multinode-455802
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-455802-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-455802-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0827 23:22:09.093344 1914034 out.go:345] Setting OutFile to fd 1 ...
	I0827 23:22:09.093537 1914034 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 23:22:09.093566 1914034 out.go:358] Setting ErrFile to fd 2...
	I0827 23:22:09.093587 1914034 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 23:22:09.093860 1914034 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19522-1737862/.minikube/bin
	I0827 23:22:09.094107 1914034 out.go:352] Setting JSON to false
	I0827 23:22:09.094182 1914034 mustload.go:65] Loading cluster: multinode-455802
	I0827 23:22:09.094253 1914034 notify.go:220] Checking for updates...
	I0827 23:22:09.094665 1914034 config.go:182] Loaded profile config "multinode-455802": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0827 23:22:09.094702 1914034 status.go:255] checking status of multinode-455802 ...
	I0827 23:22:09.095289 1914034 cli_runner.go:164] Run: docker container inspect multinode-455802 --format={{.State.Status}}
	I0827 23:22:09.115694 1914034 status.go:330] multinode-455802 host status = "Running" (err=<nil>)
	I0827 23:22:09.115721 1914034 host.go:66] Checking if "multinode-455802" exists ...
	I0827 23:22:09.116039 1914034 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-455802
	I0827 23:22:09.140915 1914034 host.go:66] Checking if "multinode-455802" exists ...
	I0827 23:22:09.141232 1914034 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0827 23:22:09.141291 1914034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-455802
	I0827 23:22:09.160353 1914034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33273 SSHKeyPath:/home/jenkins/minikube-integration/19522-1737862/.minikube/machines/multinode-455802/id_rsa Username:docker}
	I0827 23:22:09.257685 1914034 ssh_runner.go:195] Run: systemctl --version
	I0827 23:22:09.262227 1914034 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0827 23:22:09.274040 1914034 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0827 23:22:09.349809 1914034 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-08-27 23:22:09.340116064 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0827 23:22:09.350388 1914034 kubeconfig.go:125] found "multinode-455802" server: "https://192.168.58.2:8443"
	I0827 23:22:09.350420 1914034 api_server.go:166] Checking apiserver status ...
	I0827 23:22:09.350475 1914034 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0827 23:22:09.363308 1914034 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2234/cgroup
	I0827 23:22:09.373326 1914034 api_server.go:182] apiserver freezer: "9:freezer:/docker/53c629772e7c20af701d214b062db317c46ace6c07631e3da0074da00cacbad6/kubepods/burstable/pod70dbff4ef8e17c90fdb762ea8977ab6f/53b5d91dbc08151fec509834473407ca71b7c078c4a3abd20a8c6bfb928a7d74"
	I0827 23:22:09.373407 1914034 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/53c629772e7c20af701d214b062db317c46ace6c07631e3da0074da00cacbad6/kubepods/burstable/pod70dbff4ef8e17c90fdb762ea8977ab6f/53b5d91dbc08151fec509834473407ca71b7c078c4a3abd20a8c6bfb928a7d74/freezer.state
	I0827 23:22:09.382389 1914034 api_server.go:204] freezer state: "THAWED"
	I0827 23:22:09.382417 1914034 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0827 23:22:09.390365 1914034 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0827 23:22:09.390430 1914034 status.go:422] multinode-455802 apiserver status = Running (err=<nil>)
	I0827 23:22:09.390447 1914034 status.go:257] multinode-455802 status: &{Name:multinode-455802 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0827 23:22:09.390465 1914034 status.go:255] checking status of multinode-455802-m02 ...
	I0827 23:22:09.390799 1914034 cli_runner.go:164] Run: docker container inspect multinode-455802-m02 --format={{.State.Status}}
	I0827 23:22:09.407253 1914034 status.go:330] multinode-455802-m02 host status = "Running" (err=<nil>)
	I0827 23:22:09.407278 1914034 host.go:66] Checking if "multinode-455802-m02" exists ...
	I0827 23:22:09.407604 1914034 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-455802-m02
	I0827 23:22:09.423903 1914034 host.go:66] Checking if "multinode-455802-m02" exists ...
	I0827 23:22:09.424280 1914034 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0827 23:22:09.424336 1914034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-455802-m02
	I0827 23:22:09.441016 1914034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33278 SSHKeyPath:/home/jenkins/minikube-integration/19522-1737862/.minikube/machines/multinode-455802-m02/id_rsa Username:docker}
	I0827 23:22:09.537919 1914034 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0827 23:22:09.549849 1914034 status.go:257] multinode-455802-m02 status: &{Name:multinode-455802-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0827 23:22:09.549886 1914034 status.go:255] checking status of multinode-455802-m03 ...
	I0827 23:22:09.550208 1914034 cli_runner.go:164] Run: docker container inspect multinode-455802-m03 --format={{.State.Status}}
	I0827 23:22:09.567176 1914034 status.go:330] multinode-455802-m03 host status = "Stopped" (err=<nil>)
	I0827 23:22:09.567201 1914034 status.go:343] host is not running, skipping remaining checks
	I0827 23:22:09.567210 1914034 status.go:257] multinode-455802-m03 status: &{Name:multinode-455802-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.27s)

TestMultiNode/serial/StartAfterStop (11.25s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-455802 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-455802 node start m03 -v=7 --alsologtostderr: (10.455433708s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-455802 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (11.25s)

TestMultiNode/serial/RestartKeepsNodes (100.89s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-455802
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-455802
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-455802: (22.654370824s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-455802 --wait=true -v=8 --alsologtostderr
E0827 23:23:45.530114 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/functional-300627/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-455802 --wait=true -v=8 --alsologtostderr: (1m18.110015991s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-455802
--- PASS: TestMultiNode/serial/RestartKeepsNodes (100.89s)

TestMultiNode/serial/DeleteNode (5.69s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-455802 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-455802 node delete m03: (4.992648776s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-455802 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.69s)

TestMultiNode/serial/StopMultiNode (21.54s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-455802 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-455802 stop: (21.360479408s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-455802 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-455802 status: exit status 7 (88.770234ms)
-- stdout --
	multinode-455802
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-455802-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-455802 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-455802 status --alsologtostderr: exit status 7 (87.119326ms)
-- stdout --
	multinode-455802
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-455802-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0827 23:24:28.901198 1927624 out.go:345] Setting OutFile to fd 1 ...
	I0827 23:24:28.901308 1927624 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 23:24:28.901318 1927624 out.go:358] Setting ErrFile to fd 2...
	I0827 23:24:28.901324 1927624 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 23:24:28.901563 1927624 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19522-1737862/.minikube/bin
	I0827 23:24:28.901742 1927624 out.go:352] Setting JSON to false
	I0827 23:24:28.901796 1927624 mustload.go:65] Loading cluster: multinode-455802
	I0827 23:24:28.901894 1927624 notify.go:220] Checking for updates...
	I0827 23:24:28.902202 1927624 config.go:182] Loaded profile config "multinode-455802": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0827 23:24:28.902216 1927624 status.go:255] checking status of multinode-455802 ...
	I0827 23:24:28.902708 1927624 cli_runner.go:164] Run: docker container inspect multinode-455802 --format={{.State.Status}}
	I0827 23:24:28.920276 1927624 status.go:330] multinode-455802 host status = "Stopped" (err=<nil>)
	I0827 23:24:28.920297 1927624 status.go:343] host is not running, skipping remaining checks
	I0827 23:24:28.920304 1927624 status.go:257] multinode-455802 status: &{Name:multinode-455802 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0827 23:24:28.920338 1927624 status.go:255] checking status of multinode-455802-m02 ...
	I0827 23:24:28.920789 1927624 cli_runner.go:164] Run: docker container inspect multinode-455802-m02 --format={{.State.Status}}
	I0827 23:24:28.942178 1927624 status.go:330] multinode-455802-m02 host status = "Stopped" (err=<nil>)
	I0827 23:24:28.942203 1927624 status.go:343] host is not running, skipping remaining checks
	I0827 23:24:28.942212 1927624 status.go:257] multinode-455802-m02 status: &{Name:multinode-455802-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.54s)

TestMultiNode/serial/RestartMultiNode (58.08s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-455802 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0827 23:24:52.231153 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/addons-958846/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-455802 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (57.268898683s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-455802 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (58.08s)

TestMultiNode/serial/ValidateNameConflict (35.77s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-455802
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-455802-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-455802-m02 --driver=docker  --container-runtime=docker: exit status 14 (94.908277ms)
-- stdout --
	* [multinode-455802-m02] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19522
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19522-1737862/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19522-1737862/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-455802-m02' is duplicated with machine name 'multinode-455802-m02' in profile 'multinode-455802'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-455802-m03 --driver=docker  --container-runtime=docker
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-455802-m03 --driver=docker  --container-runtime=docker: (33.13022895s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-455802
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-455802: exit status 80 (357.77658ms)
-- stdout --
	* Adding node m03 to cluster multinode-455802 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-455802-m03 already exists in multinode-455802-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-455802-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-455802-m03: (2.132526441s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (35.77s)

TestPreload (139.5s)
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-617668 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-617668 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (1m42.38131961s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-617668 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-617668 image pull gcr.io/k8s-minikube/busybox: (2.222281274s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-617668
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-617668: (10.843851533s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-617668 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-617668 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (21.56845691s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-617668 image list
helpers_test.go:175: Cleaning up "test-preload-617668" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-617668
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-617668: (2.176451911s)
--- PASS: TestPreload (139.50s)

TestScheduledStopUnix (103.59s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-076071 --memory=2048 --driver=docker  --container-runtime=docker
E0827 23:28:45.530142 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/functional-300627/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-076071 --memory=2048 --driver=docker  --container-runtime=docker: (30.29993869s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-076071 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-076071 -n scheduled-stop-076071
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-076071 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-076071 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-076071 -n scheduled-stop-076071
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-076071
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-076071 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0827 23:29:52.230520 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/addons-958846/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-076071
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-076071: exit status 7 (71.652144ms)

-- stdout --
	scheduled-stop-076071
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-076071 -n scheduled-stop-076071
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-076071 -n scheduled-stop-076071: exit status 7 (74.606193ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-076071" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-076071
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-076071: (1.699619852s)
--- PASS: TestScheduledStopUnix (103.59s)

TestSkaffold (117.28s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe3596422742 version
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p skaffold-681808 --memory=2600 --driver=docker  --container-runtime=docker
skaffold_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p skaffold-681808 --memory=2600 --driver=docker  --container-runtime=docker: (31.170562608s)
skaffold_test.go:86: copying out/minikube-linux-arm64 to /home/jenkins/workspace/Docker_Linux_docker_arm64/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe3596422742 run --minikube-profile skaffold-681808 --kube-context skaffold-681808 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe3596422742 run --minikube-profile skaffold-681808 --kube-context skaffold-681808 --status-check=true --port-forward=false --interactive=false: (1m10.45749147s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-5f4d6bfcc8-mrjpf" [3e178403-e3e6-427e-b81e-ba64ed8c6585] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.003651324s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-6968d8b9fd-vcl4r" [9c8c1d6b-1dae-466d-9809-87eb6236dfee] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.004354554s
helpers_test.go:175: Cleaning up "skaffold-681808" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p skaffold-681808
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p skaffold-681808: (3.150344947s)
--- PASS: TestSkaffold (117.28s)

TestInsufficientStorage (13.44s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-877442 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-877442 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (11.138055791s)

-- stdout --
	{"specversion":"1.0","id":"33d67410-6672-49b0-a93e-b54f2cd08963","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-877442] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6a0a8bdc-a468-4efe-829b-743fdba06e79","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19522"}}
	{"specversion":"1.0","id":"53b4cb82-2db5-46c7-a5a2-37256d313544","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b14a1d28-8a47-4637-9437-e5433abbd08d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19522-1737862/kubeconfig"}}
	{"specversion":"1.0","id":"0a6ab300-6087-4f9e-91af-5e7af41c4dfe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19522-1737862/.minikube"}}
	{"specversion":"1.0","id":"a20b60d3-981e-4194-bfc0-b953932e3b7f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"a98f07e3-8790-4f7c-b839-a83cb5d72324","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"5d01d85f-8528-4175-8758-d977dc1fc8ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"8216379d-382b-407e-b964-34ec8b318dd9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"dcafe4fd-fb44-412a-a0d4-f67d16699db0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"e1188e95-0a59-4ca2-a4be-2addc53351f4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"0eb1cc08-6391-4402-8449-91ad37798f39","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-877442\" primary control-plane node in \"insufficient-storage-877442\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"1d405637-b622-484e-8262-d9a7ccd4ee10","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.44-1724667927-19511 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"31f4eb03-70eb-4fd0-affd-2123263f483e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"dc83df1e-66ac-4d78-9362-bb5c30abb903","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-877442 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-877442 --output=json --layout=cluster: exit status 7 (295.323828ms)

-- stdout --
	{"Name":"insufficient-storage-877442","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-877442","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0827 23:32:18.729431 1961892 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-877442" does not appear in /home/jenkins/minikube-integration/19522-1737862/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-877442 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-877442 --output=json --layout=cluster: exit status 7 (298.274842ms)

-- stdout --
	{"Name":"insufficient-storage-877442","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-877442","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0827 23:32:19.026490 1961953 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-877442" does not appear in /home/jenkins/minikube-integration/19522-1737862/kubeconfig
	E0827 23:32:19.039212 1961953 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/insufficient-storage-877442/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-877442" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-877442
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-877442: (1.704973163s)
--- PASS: TestInsufficientStorage (13.44s)
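Note: with `--output=json`, minikube emits one JSON event per stdout line (CloudEvents-style, as seen in the TestInsufficientStorage output above), and the test harness locates the `io.k8s.sigs.minikube.error` event to confirm the RSRC_DOCKER_STORAGE failure. A minimal Python sketch of that filtering (the real check lives in the Go test code; the events below are abbreviated copies from the log):

```python
import json

# Two abbreviated events copied from the TestInsufficientStorage stdout above.
stream = '''
{"specversion":"1.0","type":"io.k8s.sigs.minikube.step","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"26","message":"Docker is out of disk space! (/var is at 100% of capacity).","name":"RSRC_DOCKER_STORAGE"}}
'''.strip()

def first_error(lines):
    """Return the data payload of the first minikube error event, or None."""
    for line in lines:
        event = json.loads(line)
        if event.get("type") == "io.k8s.sigs.minikube.error":
            return event["data"]
    return None

err = first_error(stream.splitlines())
print(err["name"], err["exitcode"])  # RSRC_DOCKER_STORAGE 26
```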

TestRunningBinaryUpgrade (104.89s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2983980238 start -p running-upgrade-799655 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2983980238 start -p running-upgrade-799655 --memory=2200 --vm-driver=docker  --container-runtime=docker: (59.314808309s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-799655 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0827 23:41:53.136050 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/skaffold-681808/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:42:20.839183 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/skaffold-681808/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-799655 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (42.70168673s)
helpers_test.go:175: Cleaning up "running-upgrade-799655" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-799655
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-799655: (2.196971324s)
--- PASS: TestRunningBinaryUpgrade (104.89s)

TestKubernetesUpgrade (380.51s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-339252 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-339252 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m6.021141799s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-339252
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-339252: (1.353309464s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-339252 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-339252 status --format={{.Host}}: exit status 7 (93.873426ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-339252 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-339252 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m39.10591019s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-339252 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-339252 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-339252 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker: exit status 106 (98.162487ms)

-- stdout --
	* [kubernetes-upgrade-339252] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19522
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19522-1737862/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19522-1737862/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-339252
	    minikube start -p kubernetes-upgrade-339252 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3392522 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start -p kubernetes-upgrade-339252 --kubernetes-version=v1.31.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-339252 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-339252 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (31.110797837s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-339252" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-339252
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-339252: (2.634402098s)
--- PASS: TestKubernetesUpgrade (380.51s)
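Note: the K8S_DOWNGRADE_UNSUPPORTED refusal above comes from comparing the requested Kubernetes version against the cluster's current one. A hedged Python sketch of that ordering check (minikube's actual implementation is in Go and also handles pre-release tags):

```python
def parse_version(v: str) -> tuple:
    """Turn 'v1.31.0' into (1, 31, 0) for ordering. Pre-release tags not handled."""
    return tuple(int(part) for part in v.lstrip("v").split("."))

def check_requested(current: str, requested: str) -> str:
    """Refuse any request that would move an existing cluster backwards."""
    if parse_version(requested) < parse_version(current):
        # Mirrors the K8S_DOWNGRADE_UNSUPPORTED exit seen in the log above.
        return f"refuse: cannot downgrade {current} cluster to {requested}"
    return "ok"

print(check_requested("v1.31.0", "v1.20.0"))  # refuse: cannot downgrade v1.31.0 cluster to v1.20.0
print(check_requested("v1.20.0", "v1.31.0"))  # ok
```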

TestMissingContainerUpgrade (115.15s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.1089204663 start -p missing-upgrade-618549 --memory=2200 --driver=docker  --container-runtime=docker
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.1089204663 start -p missing-upgrade-618549 --memory=2200 --driver=docker  --container-runtime=docker: (37.696124787s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-618549
E0827 23:39:36.997018 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/skaffold-681808/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-618549: (10.368920598s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-618549
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-618549 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0827 23:39:52.230957 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/addons-958846/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-618549 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m3.154055236s)
helpers_test.go:175: Cleaning up "missing-upgrade-618549" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-618549
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-618549: (2.463480788s)
--- PASS: TestMissingContainerUpgrade (115.15s)

TestPause/serial/Start (80.43s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-677388 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
E0827 23:32:55.297977 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/addons-958846/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-677388 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (1m20.430056693s)
--- PASS: TestPause/serial/Start (80.43s)

TestPause/serial/SecondStartNoReconfiguration (35.4s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-677388 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-677388 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (35.390246497s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (35.40s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-586966 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-586966 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (83.692569ms)

-- stdout --
	* [NoKubernetes-586966] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19522
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19522-1737862/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19522-1737862/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

TestNoKubernetes/serial/StartWithK8s (37.59s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-586966 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-586966 --driver=docker  --container-runtime=docker: (37.09207709s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-586966 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (37.59s)

TestPause/serial/Pause (0.8s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-677388 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.80s)

TestPause/serial/VerifyStatus (0.42s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-677388 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-677388 --output=json --layout=cluster: exit status 2 (416.737057ms)

-- stdout --
	{"Name":"pause-677388","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-677388","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.42s)
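Note: the `--layout=cluster` JSON above encodes state as HTTP-style status codes (200 OK, 405 Stopped, 418 Paused; 507 InsufficientStorage appears in the TestInsufficientStorage output earlier). A small Python sketch that decodes per-component state from such a payload, using an abbreviated copy of the pause-677388 document from the log (not the test's actual code):

```python
import json

# Status-code names as they appear in this report's JSON output.
STATUS_NAMES = {200: "OK", 405: "Stopped", 418: "Paused", 500: "Error", 507: "InsufficientStorage"}

# Abbreviated from the TestPause/serial/VerifyStatus stdout above.
payload = json.loads(
    '{"Name":"pause-677388","StatusCode":418,"StatusName":"Paused",'
    '"Nodes":[{"Name":"pause-677388","StatusCode":200,'
    '"Components":{"apiserver":{"Name":"apiserver","StatusCode":418},'
    '"kubelet":{"Name":"kubelet","StatusCode":405}}}]}'
)

for node in payload["Nodes"]:
    for comp, info in node["Components"].items():
        print(comp, STATUS_NAMES.get(info["StatusCode"], "Unknown"))
# apiserver Paused
# kubelet Stopped
```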

TestPause/serial/Unpause (0.69s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-677388 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.69s)

TestPause/serial/PauseAgain (0.91s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-677388 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.91s)

TestPause/serial/DeletePaused (2.24s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-677388 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-677388 --alsologtostderr -v=5: (2.241931935s)
--- PASS: TestPause/serial/DeletePaused (2.24s)

TestPause/serial/VerifyDeletedResources (0.59s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-677388
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-677388: exit status 1 (64.12541ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-677388: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.59s)

TestNoKubernetes/serial/StartWithStopK8s (19.58s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-586966 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-586966 --no-kubernetes --driver=docker  --container-runtime=docker: (17.26062566s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-586966 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-586966 status -o json: exit status 2 (440.756546ms)

-- stdout --
	{"Name":"NoKubernetes-586966","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-586966
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-586966: (1.882176135s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (19.58s)

TestNoKubernetes/serial/Start (8.78s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-586966 --no-kubernetes --driver=docker  --container-runtime=docker
E0827 23:34:52.230767 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/addons-958846/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-586966 --no-kubernetes --driver=docker  --container-runtime=docker: (8.776460327s)
--- PASS: TestNoKubernetes/serial/Start (8.78s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.35s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-586966 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-586966 "sudo systemctl is-active --quiet service kubelet": exit status 1 (351.551883ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.35s)
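The non-zero exit above is the expected outcome: `systemctl is-active` exits 0 only when the unit is active, and by systemd's documented convention exits 3 when it is inactive — so the test treats a *failing* ssh probe as proof that kubelet is not running. A sketch of that inversion (the function name is illustrative, not from the test source):

```python
# systemctl is-active exit codes, per systemd's documented convention:
#   0 = unit active; 3 (and other non-zero values) = inactive / not found.
def kubelet_confirmed_stopped(exit_code: int) -> bool:
    """The test PASSES precisely when the probe command FAILS."""
    return exit_code != 0

print(kubelet_confirmed_stopped(3))  # exit status seen in the log → True
print(kubelet_confirmed_stopped(0))  # kubelet active → False
```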

TestNoKubernetes/serial/ProfileList (1.2s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.20s)

TestNoKubernetes/serial/Stop (1.29s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-586966
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-586966: (1.285130437s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

TestNoKubernetes/serial/StartNoArgs (9.23s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-586966 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-586966 --driver=docker  --container-runtime=docker: (9.2312155s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (9.23s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.36s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-586966 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-586966 "sudo systemctl is-active --quiet service kubelet": exit status 1 (360.599274ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.36s)

TestStoppedBinaryUpgrade/Setup (0.68s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.68s)

TestStoppedBinaryUpgrade/Upgrade (128.86s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3209186335 start -p stopped-upgrade-356495 --memory=2200 --vm-driver=docker  --container-runtime=docker
E0827 23:36:48.594993 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/functional-300627/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:36:53.136421 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/skaffold-681808/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:36:53.143001 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/skaffold-681808/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:36:53.154512 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/skaffold-681808/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:36:53.175869 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/skaffold-681808/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:36:53.217253 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/skaffold-681808/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:36:53.298959 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/skaffold-681808/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:36:53.460994 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/skaffold-681808/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:36:53.782610 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/skaffold-681808/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:36:54.424568 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/skaffold-681808/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:36:55.707078 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/skaffold-681808/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:36:58.269028 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/skaffold-681808/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:37:03.391013 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/skaffold-681808/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:37:13.632334 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/skaffold-681808/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:37:34.113736 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/skaffold-681808/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3209186335 start -p stopped-upgrade-356495 --memory=2200 --vm-driver=docker  --container-runtime=docker: (1m27.751744506s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3209186335 -p stopped-upgrade-356495 stop
E0827 23:38:15.075691 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/skaffold-681808/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3209186335 -p stopped-upgrade-356495 stop: (10.833765447s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-356495 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0827 23:38:45.531096 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/functional-300627/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-356495 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (30.274259756s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (128.86s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.48s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-356495
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-356495: (1.484233963s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.48s)

TestNetworkPlugins/group/auto/Start (82.18s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-312459 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
E0827 23:43:45.530752 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/functional-300627/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-312459 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (1m22.181368173s)
--- PASS: TestNetworkPlugins/group/auto/Start (82.18s)

TestNetworkPlugins/group/auto/KubeletFlags (0.51s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-312459 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.51s)

TestNetworkPlugins/group/auto/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-312459 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-5rgcn" [f4b71f01-5228-4681-b29b-ee901f471cfe] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-5rgcn" [f4b71f01-5228-4681-b29b-ee901f471cfe] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004866589s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.29s)

TestNetworkPlugins/group/auto/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-312459 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.24s)

TestNetworkPlugins/group/auto/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-312459 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.19s)

TestNetworkPlugins/group/auto/HairPin (0.31s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-312459 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.31s)
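The Localhost and HairPin checks above boil down to a TCP reachability probe — `nc -w 5 -i 5 -z <host> 8080` attempts a connect-and-close with a 5-second timeout and no payload. A self-contained sketch of the same probe in Python, against a throwaway local listener standing in for the netcat pod (the listener and helper are illustrative, not part of the test suite):

```python
import socket
import threading

# Throwaway listener standing in for the netcat deployment's port 8080.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # any free port stands in for 8080
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=server.accept, daemon=True).start()

def tcp_reachable(host: str, port: int, timeout: float = 5.0) -> bool:
    """Zero-payload connect check, the moral equivalent of `nc -w 5 -z`."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(tcp_reachable("127.0.0.1", port))  # → True
```

The hairpin variant differs only in the target: the pod connects back to its own service name (`nc ... -z netcat 8080`), which exercises NAT hairpinning in the CNI plugin rather than plain loopback.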

TestNetworkPlugins/group/kindnet/Start (55.65s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-312459 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
E0827 23:44:52.230376 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/addons-958846/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-312459 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (55.645482985s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (55.65s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-st6x4" [813d4710-894a-4b1f-84e0-742c37740a5e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004042313s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
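The `waiting 10m0s for pods matching "app=kindnet"` / `healthy within 6.004s` pairs throughout this log come from a generic poll-until-healthy helper: check a condition repeatedly, report the elapsed time on success, fail on timeout. A minimal sketch of that pattern (names and intervals are illustrative, not from helpers_test.go):

```python
import time

def wait_for(predicate, timeout: float, interval: float = 0.01) -> float:
    """Poll `predicate` until it is true or `timeout` elapses.

    Returns the elapsed seconds on success, mirroring the log's
    'healthy within N s' lines; raises TimeoutError otherwise.
    """
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        if predicate():
            return time.monotonic() - start
        time.sleep(interval)
    raise TimeoutError(f"condition not met within {timeout}s")

# Stand-in for a pod turning Ready shortly after creation.
ready_at = time.monotonic() + 0.05
elapsed = wait_for(lambda: time.monotonic() >= ready_at, timeout=5)
print(f"healthy within {elapsed:.3f}s")
```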

TestNetworkPlugins/group/kindnet/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-312459 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.30s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.25s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-312459 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-ktdzs" [1d87396e-2915-4f72-9fae-2f6c64b6ee27] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-ktdzs" [1d87396e-2915-4f72-9fae-2f6c64b6ee27] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004354101s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.25s)

TestNetworkPlugins/group/kindnet/DNS (0.29s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-312459 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.29s)

TestNetworkPlugins/group/kindnet/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-312459 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

TestNetworkPlugins/group/kindnet/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-312459 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.20s)

TestNetworkPlugins/group/calico/Start (84.65s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-312459 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-312459 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m24.654577593s)
--- PASS: TestNetworkPlugins/group/calico/Start (84.65s)

TestNetworkPlugins/group/custom-flannel/Start (66.08s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-312459 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
E0827 23:46:53.136386 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/skaffold-681808/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-312459 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (1m6.083260245s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (66.08s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-312459 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.31s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-312459 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-z6rxv" [5b65273c-f578-49aa-9b30-45c21cfcfddc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-z6rxv" [5b65273c-f578-49aa-9b30-45c21cfcfddc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.00619427s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.31s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-q5886" [28fbdbd1-f0a6-4763-8e85-84bcf812e9e5] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.0052676s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-312459 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

TestNetworkPlugins/group/calico/NetCatPod (10.27s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-312459 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-85kd4" [52c6f24e-f9c5-4c26-a179-bf90726c27b7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-85kd4" [52c6f24e-f9c5-4c26-a179-bf90726c27b7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.004667013s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.27s)

TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-312459 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.29s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-312459 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.29s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.29s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-312459 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.29s)

TestNetworkPlugins/group/calico/DNS (0.28s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-312459 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.28s)

TestNetworkPlugins/group/calico/Localhost (0.26s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-312459 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.26s)

TestNetworkPlugins/group/calico/HairPin (0.25s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-312459 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.25s)

TestNetworkPlugins/group/false/Start (59.27s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p false-312459 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p false-312459 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (59.273110401s)
--- PASS: TestNetworkPlugins/group/false/Start (59.27s)

TestNetworkPlugins/group/enable-default-cni/Start (47.53s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-312459 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
E0827 23:48:45.531084 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/functional-300627/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:49:00.693165 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/auto-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:49:00.699579 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/auto-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:49:00.710951 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/auto-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:49:00.732313 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/auto-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:49:00.773856 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/auto-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:49:00.855231 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/auto-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:49:01.016885 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/auto-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:49:01.338581 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/auto-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:49:01.980596 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/auto-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:49:03.262678 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/auto-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:49:05.825050 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/auto-312459/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-312459 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (47.525723982s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (47.53s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-312459 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.28s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-312459 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-dvgd7" [0f324f36-562c-482d-abec-26062f9dfba0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-dvgd7" [0f324f36-562c-482d-abec-26062f9dfba0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004450249s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.28s)

TestNetworkPlugins/group/false/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p false-312459 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.32s)

TestNetworkPlugins/group/false/NetCatPod (11.32s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-312459 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-4299j" [adbb45fa-ff0b-421e-8174-76ca4b4b4b8a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0827 23:49:10.946884 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/auto-312459/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-4299j" [adbb45fa-ff0b-421e-8174-76ca4b4b4b8a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 11.004730427s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (11.32s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-312459 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-312459 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-312459 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

TestNetworkPlugins/group/false/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-312459 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.22s)

TestNetworkPlugins/group/false/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-312459 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.17s)

TestNetworkPlugins/group/false/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-312459 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.17s)
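The Localhost and HairPin checks above both reduce to a zero-I/O TCP connect from inside the netcat pod — `nc -w 5 -z <target> 8080` succeeds if the handshake completes and sends no payload (HairPin targets the pod's own service name to verify hairpin NAT). A minimal stand-in for that probe, sketched in Python against a local throwaway listener rather than a cluster (the helper name is ours, not part of the minikube test suite):

```python
import socket
import threading

def tcp_port_open(host, port, timeout=5.0):
    """Zero-I/O connect probe, roughly what `nc -w 5 -z host port` does:
    succeed if the TCP handshake completes, send nothing."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Stand-in for the netcat pod's listener (ephemeral port instead of 8080).
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    port = srv.getsockname()[1]
    threading.Thread(target=srv.accept, daemon=True).start()
    print(tcp_port_open("127.0.0.1", port))  # True: handshake completes
    srv.close()
```

The cluster tests differ only in where the probe runs (inside the pod, via `kubectl exec`) and what it targets (`localhost` vs. the `netcat` service DNS name).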

TestNetworkPlugins/group/flannel/Start (67.66s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-312459 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-312459 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (1m7.657898006s)
--- PASS: TestNetworkPlugins/group/flannel/Start (67.66s)

TestNetworkPlugins/group/bridge/Start (82.39s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-312459 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
E0827 23:49:52.231361 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/addons-958846/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:50:22.633531 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/auto-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:50:27.993716 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/kindnet-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:50:28.000087 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/kindnet-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:50:28.011525 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/kindnet-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:50:28.033097 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/kindnet-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:50:28.075571 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/kindnet-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:50:28.157340 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/kindnet-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:50:28.318851 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/kindnet-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:50:28.640200 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/kindnet-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:50:29.282045 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/kindnet-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:50:30.563736 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/kindnet-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:50:33.125043 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/kindnet-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:50:38.247287 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/kindnet-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:50:48.488607 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/kindnet-312459/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-312459 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (1m22.393832086s)
--- PASS: TestNetworkPlugins/group/bridge/Start (82.39s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-5gl6z" [a66a1c7b-4bb1-489c-a9e5-f1374bdad165] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004744989s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-312459 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

TestNetworkPlugins/group/flannel/NetCatPod (10.3s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-312459 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-f49q7" [2a4111ae-6b0d-4aad-8978-515bf85b1ecf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-f49q7" [2a4111ae-6b0d-4aad-8978-515bf85b1ecf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.008724532s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.30s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-312459 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

TestNetworkPlugins/group/bridge/NetCatPod (10.38s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-312459 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-m4cjc" [be7bbe12-74b5-4c1d-aa5c-c8df20991c0f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-m4cjc" [be7bbe12-74b5-4c1d-aa5c-c8df20991c0f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.010739285s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.38s)

TestNetworkPlugins/group/flannel/DNS (0.29s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-312459 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.29s)

TestNetworkPlugins/group/flannel/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-312459 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.20s)

TestNetworkPlugins/group/flannel/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-312459 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.19s)

TestNetworkPlugins/group/bridge/DNS (0.3s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-312459 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.30s)

TestNetworkPlugins/group/bridge/Localhost (0.28s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-312459 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.28s)

TestNetworkPlugins/group/bridge/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-312459 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.20s)

TestNetworkPlugins/group/kubenet/Start (86.97s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kubenet-312459 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kubenet-312459 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (1m26.971682839s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (86.97s)

TestStartStop/group/old-k8s-version/serial/FirstStart (130.33s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-165195 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0827 23:51:44.554873 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/auto-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:51:49.933110 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/kindnet-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:51:53.136794 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/skaffold-681808/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:52:30.763930 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/custom-flannel-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:52:30.770299 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/custom-flannel-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:52:30.781676 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/custom-flannel-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:52:30.803032 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/custom-flannel-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:52:30.844404 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/custom-flannel-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:52:30.928776 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/custom-flannel-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:52:31.090160 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/custom-flannel-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:52:31.411762 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/custom-flannel-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:52:32.053845 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/custom-flannel-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:52:33.335641 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/custom-flannel-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:52:33.898406 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/calico-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:52:33.904713 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/calico-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:52:33.916088 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/calico-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:52:33.937441 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/calico-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:52:33.978802 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/calico-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:52:34.060163 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/calico-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:52:34.221708 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/calico-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:52:34.543368 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/calico-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:52:35.185559 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/calico-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:52:35.897859 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/custom-flannel-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:52:36.467638 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/calico-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:52:39.029306 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/calico-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:52:41.019643 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/custom-flannel-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:52:44.150754 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/calico-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:52:51.260961 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/custom-flannel-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:52:54.393073 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/calico-312459/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-165195 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m10.329234943s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (130.33s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kubenet-312459 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.27s)

TestNetworkPlugins/group/kubenet/NetCatPod (11.29s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-312459 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-tr785" [c049796a-9af3-4b74-b188-13b9a79abc03] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-tr785" [c049796a-9af3-4b74-b188-13b9a79abc03] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 11.003320897s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (11.29s)

TestNetworkPlugins/group/kubenet/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-312459 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.20s)
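The DNS tests above pass if `nslookup kubernetes.default`, run inside the netcat pod, returns at least one address — in-cluster, the pod's resolver search path expands the short name to `kubernetes.default.svc.cluster.local`. The success criterion can be sketched with the standard library (helper name is ours; outside a cluster we can only probe a locally resolvable name):

```python
import socket

def resolves(name):
    """True if `name` resolves to at least one address — the same
    pass/fail criterion the log applies to `nslookup kubernetes.default`."""
    try:
        return len(socket.getaddrinfo(name, None)) > 0
    except socket.gaierror:
        return False

if __name__ == "__main__":
    # In-cluster the probe would target "kubernetes.default"; here we
    # use "localhost" as a stand-in that any host's resolver can answer.
    print(resolves("localhost"))
```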

TestNetworkPlugins/group/kubenet/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-312459 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.18s)

TestNetworkPlugins/group/kubenet/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-312459 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.16s)
E0828 00:04:52.231096 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/addons-958846/client.crt: no such file or directory" logger="UnhandledError"
E0828 00:05:23.758487 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/auto-312459/client.crt: no such file or directory" logger="UnhandledError"

TestStartStop/group/embed-certs/serial/FirstStart (76.48s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-305796 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
E0827 23:53:45.531001 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/functional-300627/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:53:52.704761 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/custom-flannel-312459/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-305796 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (1m16.484272542s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (76.48s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-165195 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5ed8b12a-ac18-4e95-a67a-dc5a7c580e26] Pending
helpers_test.go:344: "busybox" [5ed8b12a-ac18-4e95-a67a-dc5a7c580e26] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0827 23:53:55.836677 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/calico-312459/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [5ed8b12a-ac18-4e95-a67a-dc5a7c580e26] Running
E0827 23:54:00.693106 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/auto-312459/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004663966s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-165195 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.60s)
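The DeployApp check above finishes by running `ulimit -n` inside the busybox pod to read its open-file limit. A minimal local sketch of the same validation, using the current shell's own limit as a stand-in for the `kubectl exec` output (the `min` threshold is an assumed example value, not something the test enforces):

```shell
# Stand-in for `kubectl exec busybox -- /bin/sh -c "ulimit -n"`:
# read this shell's soft open-file limit instead of the pod's.
limit="$(ulimit -n)"
min=1024  # assumed example threshold; the test itself only runs the command

if [ "$limit" = "unlimited" ] || [ "$limit" -ge "$min" ] 2>/dev/null; then
  echo "open-file limit ok: $limit"
else
  echo "open-file limit too low: $limit" >&2
fi
```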

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.29s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-165195 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-165195 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.097238975s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-165195 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.29s)

TestStartStop/group/old-k8s-version/serial/Stop (10.95s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-165195 --alsologtostderr -v=3
E0827 23:54:06.835615 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/enable-default-cni-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:54:06.841991 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/enable-default-cni-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:54:06.853394 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/enable-default-cni-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:54:06.874751 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/enable-default-cni-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:54:06.916937 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/enable-default-cni-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:54:07.000128 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/enable-default-cni-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:54:07.162267 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/enable-default-cni-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:54:07.483512 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/enable-default-cni-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:54:07.559965 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/false-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:54:07.566285 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/false-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:54:07.577679 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/false-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:54:07.599013 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/false-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:54:07.640624 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/false-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:54:07.721917 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/false-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:54:07.883324 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/false-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:54:08.125083 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/enable-default-cni-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:54:08.204753 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/false-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:54:08.846575 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/false-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:54:09.406868 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/enable-default-cni-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:54:10.127920 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/false-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:54:11.968711 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/enable-default-cni-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:54:12.690060 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/false-312459/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-165195 --alsologtostderr -v=3: (10.950897142s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (10.95s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-165195 -n old-k8s-version-165195
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-165195 -n old-k8s-version-165195: exit status 7 (77.226918ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-165195 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)
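The `status error: exit status 7 (may be ok)` line above reflects that `minikube status` encodes cluster state in its exit code rather than signalling a hard failure; as the stdout shows, exit status 7 here accompanies a `Stopped` host, so the test tolerates it. A generic shell sketch of that pattern, where `fake_status` is a hypothetical stand-in for the real binary:

```shell
# fake_status mimics `minikube status` against a stopped cluster:
# it prints the host state and exits 7 instead of 0.
fake_status() { printf 'Stopped\n'; return 7; }

state="$(fake_status)"
rc=$?   # exit status of the command substitution

case "$rc" in
  0|7) echo "host=$state rc=$rc (tolerated)" ;;
  *)   echo "unexpected failure rc=$rc" >&2; exit 1 ;;
esac
```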

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (121.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-165195 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0827 23:54:17.090501 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/enable-default-cni-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:54:17.811561 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/false-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:54:27.332885 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/enable-default-cni-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:54:28.053447 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/false-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:54:28.396732 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/auto-312459/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-165195 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m0.726690436s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-165195 -n old-k8s-version-165195
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (121.13s)

TestStartStop/group/embed-certs/serial/DeployApp (8.4s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-305796 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4949c518-b637-4c8e-ab28-2e81727031cb] Pending
E0827 23:54:47.814243 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/enable-default-cni-312459/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [4949c518-b637-4c8e-ab28-2e81727031cb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0827 23:54:48.535483 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/false-312459/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [4949c518-b637-4c8e-ab28-2e81727031cb] Running
E0827 23:54:52.230898 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/addons-958846/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004064537s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-305796 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.40s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.42s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-305796 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-305796 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.279011519s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-305796 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.42s)

TestStartStop/group/embed-certs/serial/Stop (11.21s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-305796 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-305796 --alsologtostderr -v=3: (11.206536133s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.21s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-305796 -n embed-certs-305796
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-305796 -n embed-certs-305796: exit status 7 (75.818916ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-305796 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/embed-certs/serial/SecondStart (267.43s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-305796 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
E0827 23:55:14.626887 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/custom-flannel-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:55:17.758899 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/calico-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:55:27.994396 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/kindnet-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:55:28.776359 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/enable-default-cni-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:55:29.497686 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/false-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:55:50.388338 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/flannel-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:55:50.394821 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/flannel-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:55:50.406208 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/flannel-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:55:50.427630 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/flannel-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:55:50.469058 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/flannel-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:55:50.550481 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/flannel-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:55:50.712162 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/flannel-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:55:51.034063 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/flannel-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:55:51.676140 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/flannel-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:55:52.957442 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/flannel-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:55:55.518799 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/flannel-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:55:55.696434 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/kindnet-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:56:00.641234 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/flannel-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:56:07.171570 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/bridge-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:56:07.178007 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/bridge-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:56:07.189535 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/bridge-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:56:07.211043 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/bridge-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:56:07.252616 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/bridge-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:56:07.334165 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/bridge-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:56:07.495715 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/bridge-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:56:07.817078 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/bridge-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:56:08.459223 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/bridge-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:56:09.741040 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/bridge-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:56:10.883540 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/flannel-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:56:12.302781 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/bridge-312459/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-305796 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (4m27.072778861s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-305796 -n embed-certs-305796
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (267.43s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-ff8tw" [a64d764b-a82a-4a95-be48-96fe03ad7243] Running
E0827 23:56:17.424874 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/bridge-312459/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004347131s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-ff8tw" [a64d764b-a82a-4a95-be48-96fe03ad7243] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003876152s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-165195 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-165195 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/old-k8s-version/serial/Pause (2.87s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-165195 --alsologtostderr -v=1
E0827 23:56:27.667284 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/bridge-312459/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-165195 -n old-k8s-version-165195
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-165195 -n old-k8s-version-165195: exit status 2 (325.419058ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-165195 -n old-k8s-version-165195
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-165195 -n old-k8s-version-165195: exit status 2 (325.005881ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-165195 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-165195 -n old-k8s-version-165195
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-165195 -n old-k8s-version-165195
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.87s)

TestStartStop/group/no-preload/serial/FirstStart (87.34s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-302556 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
E0827 23:56:48.149516 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/bridge-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:56:50.698738 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/enable-default-cni-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:56:51.419416 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/false-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:56:53.137022 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/skaffold-681808/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:57:12.327661 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/flannel-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:57:29.111797 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/bridge-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:57:30.763748 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/custom-flannel-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:57:33.898254 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/calico-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:57:58.468732 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/custom-flannel-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:57:58.907873 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/kubenet-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:57:58.914211 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/kubenet-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:57:58.926328 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/kubenet-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:57:58.947826 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/kubenet-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:57:58.989285 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/kubenet-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:57:59.070763 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/kubenet-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:57:59.232268 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/kubenet-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:57:59.553811 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/kubenet-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:58:00.195953 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/kubenet-312459/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-302556 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (1m27.336221902s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (87.34s)

TestStartStop/group/no-preload/serial/DeployApp (9.35s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-302556 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [fdb7e517-30cf-4689-a29a-37c418838c7e] Pending
E0827 23:58:01.487699 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/kubenet-312459/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [fdb7e517-30cf-4689-a29a-37c418838c7e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0827 23:58:01.600697 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/calico-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:58:04.049788 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/kubenet-312459/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [fdb7e517-30cf-4689-a29a-37c418838c7e] Running
E0827 23:58:09.172004 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/kubenet-312459/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004204348s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-302556 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.35s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-302556 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-302556 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.11s)

TestStartStop/group/no-preload/serial/Stop (10.97s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-302556 --alsologtostderr -v=3
E0827 23:58:19.413685 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/kubenet-312459/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-302556 --alsologtostderr -v=3: (10.968812775s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.97s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-302556 -n no-preload-302556
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-302556 -n no-preload-302556: exit status 7 (64.38914ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-302556 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/no-preload/serial/SecondStart (266.70s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-302556 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
E0827 23:58:34.249072 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/flannel-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:58:39.895567 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/kubenet-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:58:45.530254 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/functional-300627/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:58:51.034526 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/bridge-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:58:53.217204 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/old-k8s-version-165195/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:58:53.223665 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/old-k8s-version-165195/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:58:53.235271 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/old-k8s-version-165195/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:58:53.256634 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/old-k8s-version-165195/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:58:53.298048 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/old-k8s-version-165195/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:58:53.379429 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/old-k8s-version-165195/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:58:53.541005 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/old-k8s-version-165195/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:58:53.862639 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/old-k8s-version-165195/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:58:54.504702 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/old-k8s-version-165195/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:58:55.786480 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/old-k8s-version-165195/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:58:58.348621 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/old-k8s-version-165195/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:59:00.692716 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/auto-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:59:03.470410 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/old-k8s-version-165195/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:59:06.835718 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/enable-default-cni-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:59:07.560638 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/false-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:59:13.711784 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/old-k8s-version-165195/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:59:20.857190 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/kubenet-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:59:34.193686 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/old-k8s-version-165195/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:59:34.540788 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/enable-default-cni-312459/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:59:35.261568 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/false-312459/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-302556 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (4m26.341540962s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-302556 -n no-preload-302556
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (266.70s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-45656" [d4d3a5b4-bddc-4ee9-a17b-a7121fe80344] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003741305s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-45656" [d4d3a5b4-bddc-4ee9-a17b-a7121fe80344] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0041423s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-305796 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-305796 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/embed-certs/serial/Pause (2.91s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-305796 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-305796 -n embed-certs-305796
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-305796 -n embed-certs-305796: exit status 2 (337.693032ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-305796 -n embed-certs-305796
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-305796 -n embed-certs-305796: exit status 2 (361.824014ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-305796 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-305796 -n embed-certs-305796
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-305796 -n embed-certs-305796
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.91s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (45.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-841323 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
E0828 00:00:15.155514 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/old-k8s-version-165195/client.crt: no such file or directory" logger="UnhandledError"
E0828 00:00:27.993776 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/kindnet-312459/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-841323 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (45.037535329s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (45.04s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.37s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-841323 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ec93d279-5f5c-4c9c-bbc4-a70b1fcd0354] Pending
helpers_test.go:344: "busybox" [ec93d279-5f5c-4c9c-bbc4-a70b1fcd0354] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ec93d279-5f5c-4c9c-bbc4-a70b1fcd0354] Running
E0828 00:00:42.778582 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/kubenet-312459/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004276332s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-841323 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.37s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.16s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-841323 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-841323 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.022742163s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-841323 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.16s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (10.84s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-841323 --alsologtostderr -v=3
E0828 00:00:50.387811 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/flannel-312459/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-841323 --alsologtostderr -v=3: (10.838849818s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.84s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-841323 -n default-k8s-diff-port-841323
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-841323 -n default-k8s-diff-port-841323: exit status 7 (72.293189ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-841323 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (266.66s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-841323 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
E0828 00:01:07.170961 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/bridge-312459/client.crt: no such file or directory" logger="UnhandledError"
E0828 00:01:18.091011 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/flannel-312459/client.crt: no such file or directory" logger="UnhandledError"
E0828 00:01:34.876386 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/bridge-312459/client.crt: no such file or directory" logger="UnhandledError"
E0828 00:01:37.077255 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/old-k8s-version-165195/client.crt: no such file or directory" logger="UnhandledError"
E0828 00:01:53.137232 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/skaffold-681808/client.crt: no such file or directory" logger="UnhandledError"
E0828 00:02:30.763385 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/custom-flannel-312459/client.crt: no such file or directory" logger="UnhandledError"
E0828 00:02:33.898642 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/calico-312459/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-841323 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (4m26.309477747s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-841323 -n default-k8s-diff-port-841323
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (266.66s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-q5xnv" [bf666244-f614-48dc-8fa8-7ddd7ca1ba6f] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00324343s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-q5xnv" [bf666244-f614-48dc-8fa8-7ddd7ca1ba6f] Running
E0828 00:02:58.907545 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/kubenet-312459/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003312579s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-302556 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-302556 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/no-preload/serial/Pause (3.13s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-302556 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-302556 -n no-preload-302556
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-302556 -n no-preload-302556: exit status 2 (320.653368ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-302556 -n no-preload-302556
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-302556 -n no-preload-302556: exit status 2 (341.709105ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-302556 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-302556 -n no-preload-302556
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-302556 -n no-preload-302556
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.13s)

TestStartStop/group/newest-cni/serial/FirstStart (39.71s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-942451 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
E0828 00:03:26.620863 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/kubenet-312459/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-942451 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (39.705548101s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (39.71s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.12s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-942451 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0828 00:03:45.531509 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/functional-300627/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-942451 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.115027451s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.12s)

TestStartStop/group/newest-cni/serial/Stop (11.03s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-942451 --alsologtostderr -v=3
E0828 00:03:53.218065 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/old-k8s-version-165195/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-942451 --alsologtostderr -v=3: (11.026807299s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.03s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-942451 -n newest-cni-942451
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-942451 -n newest-cni-942451: exit status 7 (70.939135ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-942451 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/newest-cni/serial/SecondStart (18.37s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-942451 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
E0828 00:04:00.692517 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/auto-312459/client.crt: no such file or directory" logger="UnhandledError"
E0828 00:04:06.835717 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/enable-default-cni-312459/client.crt: no such file or directory" logger="UnhandledError"
E0828 00:04:07.560319 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/false-312459/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-942451 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (17.898801048s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-942451 -n newest-cni-942451
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (18.37s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-942451 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/newest-cni/serial/Pause (3.07s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-942451 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-942451 -n newest-cni-942451
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-942451 -n newest-cni-942451: exit status 2 (368.797054ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-942451 -n newest-cni-942451
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-942451 -n newest-cni-942451: exit status 2 (376.239142ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-942451 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-942451 -n newest-cni-942451
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-942451 -n newest-cni-942451
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.07s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-bfx69" [15666a6e-6959-4823-8382-2d336e560ef8] Running
E0828 00:05:27.993833 1743249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/kindnet-312459/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003654441s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-bfx69" [15666a6e-6959-4823-8382-2d336e560ef8] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004060081s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-841323 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-841323 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.86s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-841323 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-841323 -n default-k8s-diff-port-841323
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-841323 -n default-k8s-diff-port-841323: exit status 2 (315.967604ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-841323 -n default-k8s-diff-port-841323
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-841323 -n default-k8s-diff-port-841323: exit status 2 (325.317743ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-841323 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-841323 -n default-k8s-diff-port-841323
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-841323 -n default-k8s-diff-port-841323
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.86s)

Test skip (24/343)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

TestDownloadOnly/v1.31.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0/kubectl (0.00s)

TestDownloadOnlyKic (0.51s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-337482 --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-337482" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-337482
--- SKIP: TestDownloadOnlyKic (0.51s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/cilium (6.04s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-312459 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-312459

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-312459

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-312459

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-312459

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-312459

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-312459

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-312459

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-312459

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-312459

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-312459

>>> host: /etc/nsswitch.conf:
* Profile "cilium-312459" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312459"

>>> host: /etc/hosts:
* Profile "cilium-312459" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312459"

>>> host: /etc/resolv.conf:
* Profile "cilium-312459" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312459"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-312459

>>> host: crictl pods:
* Profile "cilium-312459" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312459"

>>> host: crictl containers:
* Profile "cilium-312459" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312459"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-312459" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-312459" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-312459" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-312459" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-312459" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-312459" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-312459" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-312459" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-312459" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312459"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-312459" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312459"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-312459" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312459"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-312459" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312459"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-312459" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312459"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-312459

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-312459

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-312459" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-312459" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-312459

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-312459

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-312459" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-312459" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-312459" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-312459" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-312459" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-312459" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312459"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-312459" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312459"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-312459" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312459"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-312459" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312459"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-312459" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312459"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
certificate-authority: /home/jenkins/minikube-integration/19522-1737862/.minikube/ca.crt
extensions:
- extension:
last-update: Tue, 27 Aug 2024 23:34:22 UTC
provider: minikube.sigs.k8s.io
version: v1.33.1
name: cluster_info
server: https://192.168.85.2:8443
name: NoKubernetes-586966
contexts:
- context:
cluster: NoKubernetes-586966
extensions:
- extension:
last-update: Tue, 27 Aug 2024 23:34:22 UTC
provider: minikube.sigs.k8s.io
version: v1.33.1
name: context_info
namespace: default
user: NoKubernetes-586966
name: NoKubernetes-586966
current-context: NoKubernetes-586966
kind: Config
preferences: {}
users:
- name: NoKubernetes-586966
user:
client-certificate: /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/NoKubernetes-586966/client.crt
client-key: /home/jenkins/minikube-integration/19522-1737862/.minikube/profiles/NoKubernetes-586966/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-312459

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-312459" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312459"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-312459" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312459"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-312459" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312459"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-312459" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312459"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-312459" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312459"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-312459" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312459"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-312459" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312459"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-312459" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312459"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-312459" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312459"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-312459" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312459"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-312459" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312459"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-312459" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312459"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-312459" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312459"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-312459" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312459"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-312459" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312459"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-312459" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312459"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-312459" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312459"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-312459" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-312459"

                                                
                                                
----------------------- debugLogs end: cilium-312459 [took: 5.859104154s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-312459" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-312459
--- SKIP: TestNetworkPlugins/group/cilium (6.04s)

TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-555775" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-555775
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)