Test Report: Docker_Linux_docker_arm64 19662

                    
3f64d3c641e64b460ff7a3cff080aebef74ca5ca:2024-09-17:36258

Failed tests (1/343)

| Order | Failed test                  | Duration (s) |
|-------|------------------------------|--------------|
| 33    | TestAddons/parallel/Registry | 76.48        |
TestAddons/parallel/Registry (76.48s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 4.463094ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-zt9dz" [e7f2fc50-5c03-4aec-9040-85d9963af8e6] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.006190148s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-r92r6" [5d64f5cf-2b0e-40f7-88ca-5822f9941c5a] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004741338s
addons_test.go:342: (dbg) Run:  kubectl --context addons-731605 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-731605 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-731605 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.114054061s)

                                                
                                                
-- stdout --
	pod "registry-test" deleted
-- /stdout --
** stderr ** 
	error: timed out waiting for the condition
** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-731605 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
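For context, the pass condition reflected in the failure message above amounts to a substring check: the `wget --spider -S` output must contain an HTTP 200 status line, and on a timeout only the "pod deleted" message survives. A minimal illustrative sketch (the helper name is hypothetical, not the test's actual code):

```python
def registry_reachable(wget_output: str) -> bool:
    """Mirrors the expectation at addons_test.go:353: the captured
    `wget --spider -S` output must contain an HTTP 200 status line.
    When the request times out, stdout holds only the pod-deletion
    message, so the check fails."""
    return "HTTP/1.1 200" in wget_output

# The stdout actually captured in this failure:
captured = 'pod "registry-test" deleted\n'
assert not registry_reachable(captured)

# A healthy registry would have produced a response header like this:
assert registry_reachable("HTTP/1.1 200 OK\nContent-Type: text/html")
```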
addons_test.go:361: (dbg) Run:  out/minikube-linux-arm64 -p addons-731605 ip
addons_test.go:390: (dbg) Run:  out/minikube-linux-arm64 -p addons-731605 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-731605
helpers_test.go:235: (dbg) docker inspect addons-731605:
-- stdout --
	[
	    {
	        "Id": "e9b97591b363ab63b16c040cdde39d31d167b8245adc3ac2186ca3175ab45e62",
	        "Created": "2024-09-17T16:56:21.054930089Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 8820,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-17T16:56:21.228982647Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:f8be4f9f9351784955e36c0e64d55ad19451839d9f6d0c057285eb8f9072963b",
	        "ResolvConfPath": "/var/lib/docker/containers/e9b97591b363ab63b16c040cdde39d31d167b8245adc3ac2186ca3175ab45e62/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e9b97591b363ab63b16c040cdde39d31d167b8245adc3ac2186ca3175ab45e62/hostname",
	        "HostsPath": "/var/lib/docker/containers/e9b97591b363ab63b16c040cdde39d31d167b8245adc3ac2186ca3175ab45e62/hosts",
	        "LogPath": "/var/lib/docker/containers/e9b97591b363ab63b16c040cdde39d31d167b8245adc3ac2186ca3175ab45e62/e9b97591b363ab63b16c040cdde39d31d167b8245adc3ac2186ca3175ab45e62-json.log",
	        "Name": "/addons-731605",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-731605:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-731605",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/ab4d6a97f15752e23e61ca964f5cab8f417af512274b7a21961dbd22c4e22911-init/diff:/var/lib/docker/overlay2/661d29c6509a75bb24f7ab0157c48263e53b9e4426011b7a7b71a55adee7d7b7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ab4d6a97f15752e23e61ca964f5cab8f417af512274b7a21961dbd22c4e22911/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ab4d6a97f15752e23e61ca964f5cab8f417af512274b7a21961dbd22c4e22911/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ab4d6a97f15752e23e61ca964f5cab8f417af512274b7a21961dbd22c4e22911/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-731605",
	                "Source": "/var/lib/docker/volumes/addons-731605/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-731605",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-731605",
	                "name.minikube.sigs.k8s.io": "addons-731605",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5a36375090e0a8067406466fa321e9b2daabbf67ac5628f22d28883325fc6b84",
	            "SandboxKey": "/var/run/docker/netns/5a36375090e0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-731605": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "4e1264060639f1108fea50fdf8b216f0e6b32a99ca56b1ad2099317731b4a5b0",
	                    "EndpointID": "d8f62c04227b62c91a980f4712453ef6cf32f2a9383aa293d115fdff002c4592",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-731605",
	                        "e9b97591b363"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
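The `NetworkSettings.Ports` map in the inspect output above is where the host-side port mappings (SSH on 32768, the API server on 32771, and so on) can be read back programmatically. A small sketch parsing an abridged inspect-style excerpt (sample data taken from the output above):

```python
import json

# Abridged excerpt of the `docker inspect addons-731605` output above.
inspect_json = """
[
  {
    "NetworkSettings": {
      "Ports": {
        "22/tcp":   [{"HostIp": "127.0.0.1", "HostPort": "32768"}],
        "8443/tcp": [{"HostIp": "127.0.0.1", "HostPort": "32771"}]
      }
    }
  }
]
"""

def host_ports(inspect_output: str) -> dict:
    """Map each container port to its published host port.

    `docker inspect` returns a JSON array with one object per
    container; unpublished ports have a null/empty bindings list
    and are skipped here.
    """
    container = json.loads(inspect_output)[0]
    ports = container["NetworkSettings"]["Ports"]
    return {cport: binds[0]["HostPort"]
            for cport, binds in ports.items() if binds}

print(host_ports(inspect_json))  # {'22/tcp': '32768', '8443/tcp': '32771'}
```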
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-731605 -n addons-731605
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-731605 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-731605 logs -n 25: (1.411554742s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-017300   | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC |                     |
	|         | -p download-only-017300                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                                                |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube               | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC | 17 Sep 24 16:55 UTC |
	| delete  | -p download-only-017300                                                                     | download-only-017300   | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC | 17 Sep 24 16:55 UTC |
	| start   | -o=json --download-only                                                                     | download-only-253478   | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC |                     |
	|         | -p download-only-253478                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                                                                |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube               | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC | 17 Sep 24 16:55 UTC |
	| delete  | -p download-only-253478                                                                     | download-only-253478   | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC | 17 Sep 24 16:55 UTC |
	| delete  | -p download-only-017300                                                                     | download-only-017300   | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC | 17 Sep 24 16:55 UTC |
	| delete  | -p download-only-253478                                                                     | download-only-253478   | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC | 17 Sep 24 16:55 UTC |
	| start   | --download-only -p                                                                          | download-docker-449671 | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC |                     |
	|         | download-docker-449671                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | -p download-docker-449671                                                                   | download-docker-449671 | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC | 17 Sep 24 16:55 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-460466   | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC |                     |
	|         | binary-mirror-460466                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:37897                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-460466                                                                     | binary-mirror-460466   | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC | 17 Sep 24 16:55 UTC |
	| addons  | disable dashboard -p                                                                        | addons-731605          | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC |                     |
	|         | addons-731605                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-731605          | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC |                     |
	|         | addons-731605                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-731605 --wait=true                                                                | addons-731605          | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC | 17 Sep 24 16:59 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | addons-731605 addons disable                                                                | addons-731605          | jenkins | v1.34.0 | 17 Sep 24 17:00 UTC | 17 Sep 24 17:00 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-731605 addons disable                                                                | addons-731605          | jenkins | v1.34.0 | 17 Sep 24 17:08 UTC | 17 Sep 24 17:08 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | addons-731605 addons                                                                        | addons-731605          | jenkins | v1.34.0 | 17 Sep 24 17:09 UTC | 17 Sep 24 17:09 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-731605 addons                                                                        | addons-731605          | jenkins | v1.34.0 | 17 Sep 24 17:09 UTC | 17 Sep 24 17:09 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-731605          | jenkins | v1.34.0 | 17 Sep 24 17:09 UTC | 17 Sep 24 17:09 UTC |
	|         | -p addons-731605                                                                            |                        |         |         |                     |                     |
	| ip      | addons-731605 ip                                                                            | addons-731605          | jenkins | v1.34.0 | 17 Sep 24 17:09 UTC | 17 Sep 24 17:09 UTC |
	| addons  | addons-731605 addons disable                                                                | addons-731605          | jenkins | v1.34.0 | 17 Sep 24 17:09 UTC | 17 Sep 24 17:09 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ssh     | addons-731605 ssh cat                                                                       | addons-731605          | jenkins | v1.34.0 | 17 Sep 24 17:09 UTC | 17 Sep 24 17:09 UTC |
	|         | /opt/local-path-provisioner/pvc-2a686ca8-31e7-49b1-9287-cefb015b44f4_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-731605 addons disable                                                                | addons-731605          | jenkins | v1.34.0 | 17 Sep 24 17:09 UTC | 17 Sep 24 17:09 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 16:55:56
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 16:55:56.261150    8324 out.go:345] Setting OutFile to fd 1 ...
	I0917 16:55:56.261363    8324 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 16:55:56.261390    8324 out.go:358] Setting ErrFile to fd 2...
	I0917 16:55:56.261412    8324 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 16:55:56.261693    8324 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-2253/.minikube/bin
	I0917 16:55:56.262221    8324 out.go:352] Setting JSON to false
	I0917 16:55:56.263067    8324 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":2303,"bootTime":1726589854,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0917 16:55:56.263171    8324 start.go:139] virtualization:  
	I0917 16:55:56.266206    8324 out.go:177] * [addons-731605] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0917 16:55:56.268807    8324 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 16:55:56.268859    8324 notify.go:220] Checking for updates...
	I0917 16:55:56.273096    8324 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 16:55:56.275593    8324 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19662-2253/kubeconfig
	I0917 16:55:56.277853    8324 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-2253/.minikube
	I0917 16:55:56.279816    8324 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0917 16:55:56.281828    8324 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 16:55:56.284349    8324 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 16:55:56.306305    8324 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0917 16:55:56.306436    8324 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 16:55:56.369425    8324 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-17 16:55:56.359203789 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0917 16:55:56.369541    8324 docker.go:318] overlay module found
	I0917 16:55:56.373244    8324 out.go:177] * Using the docker driver based on user configuration
	I0917 16:55:56.375154    8324 start.go:297] selected driver: docker
	I0917 16:55:56.375171    8324 start.go:901] validating driver "docker" against <nil>
	I0917 16:55:56.375186    8324 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 16:55:56.375876    8324 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 16:55:56.430364    8324 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-17 16:55:56.42070162 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0917 16:55:56.430574    8324 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 16:55:56.430811    8324 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 16:55:56.432904    8324 out.go:177] * Using Docker driver with root privileges
	I0917 16:55:56.434914    8324 cni.go:84] Creating CNI manager for ""
	I0917 16:55:56.434980    8324 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 16:55:56.434994    8324 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0917 16:55:56.435064    8324 start.go:340] cluster config:
	{Name:addons-731605 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-731605 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 16:55:56.437716    8324 out.go:177] * Starting "addons-731605" primary control-plane node in "addons-731605" cluster
	I0917 16:55:56.439884    8324 cache.go:121] Beginning downloading kic base image for docker with docker
	I0917 16:55:56.442514    8324 out.go:177] * Pulling base image v0.0.45-1726589491-19662 ...
	I0917 16:55:56.444890    8324 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 16:55:56.444945    8324 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19662-2253/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0917 16:55:56.444956    8324 cache.go:56] Caching tarball of preloaded images
	I0917 16:55:56.444990    8324 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	I0917 16:55:56.445058    8324 preload.go:172] Found /home/jenkins/minikube-integration/19662-2253/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0917 16:55:56.445070    8324 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 16:55:56.445482    8324 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/config.json ...
	I0917 16:55:56.445514    8324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/config.json: {Name:mkcd6dda44a0dbe49e232a889ca4c689e63d6c22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:55:56.460939    8324 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0917 16:55:56.461075    8324 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
	I0917 16:55:56.461099    8324 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory, skipping pull
	I0917 16:55:56.461109    8324 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 exists in cache, skipping pull
	I0917 16:55:56.461117    8324 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 as a tarball
	I0917 16:55:56.461123    8324 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from local cache
	I0917 16:56:14.134456    8324 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from cached tarball
	I0917 16:56:14.134495    8324 cache.go:194] Successfully downloaded all kic artifacts
	I0917 16:56:14.134532    8324 start.go:360] acquireMachinesLock for addons-731605: {Name:mk85601fc5fe208ad3ac2f2740b3e068a6bf1f0e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 16:56:14.134659    8324 start.go:364] duration metric: took 106.846µs to acquireMachinesLock for "addons-731605"
	I0917 16:56:14.134705    8324 start.go:93] Provisioning new machine with config: &{Name:addons-731605 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-731605 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 16:56:14.134788    8324 start.go:125] createHost starting for "" (driver="docker")
	I0917 16:56:14.137281    8324 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0917 16:56:14.137594    8324 start.go:159] libmachine.API.Create for "addons-731605" (driver="docker")
	I0917 16:56:14.137631    8324 client.go:168] LocalClient.Create starting
	I0917 16:56:14.137760    8324 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19662-2253/.minikube/certs/ca.pem
	I0917 16:56:14.646320    8324 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19662-2253/.minikube/certs/cert.pem
	I0917 16:56:14.922390    8324 cli_runner.go:164] Run: docker network inspect addons-731605 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0917 16:56:14.940931    8324 cli_runner.go:211] docker network inspect addons-731605 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0917 16:56:14.941018    8324 network_create.go:284] running [docker network inspect addons-731605] to gather additional debugging logs...
	I0917 16:56:14.941043    8324 cli_runner.go:164] Run: docker network inspect addons-731605
	W0917 16:56:14.955493    8324 cli_runner.go:211] docker network inspect addons-731605 returned with exit code 1
	I0917 16:56:14.955524    8324 network_create.go:287] error running [docker network inspect addons-731605]: docker network inspect addons-731605: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-731605 not found
	I0917 16:56:14.955543    8324 network_create.go:289] output of [docker network inspect addons-731605]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-731605 not found
	
	** /stderr **
	I0917 16:56:14.955654    8324 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 16:56:14.972264    8324 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001c1c4b0}
	I0917 16:56:14.972300    8324 network_create.go:124] attempt to create docker network addons-731605 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0917 16:56:14.972352    8324 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-731605 addons-731605
	I0917 16:56:15.062238    8324 network_create.go:108] docker network addons-731605 192.168.49.0/24 created
	I0917 16:56:15.062278    8324 kic.go:121] calculated static IP "192.168.49.2" for the "addons-731605" container
	I0917 16:56:15.062366    8324 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0917 16:56:15.080021    8324 cli_runner.go:164] Run: docker volume create addons-731605 --label name.minikube.sigs.k8s.io=addons-731605 --label created_by.minikube.sigs.k8s.io=true
	I0917 16:56:15.100611    8324 oci.go:103] Successfully created a docker volume addons-731605
	I0917 16:56:15.100720    8324 cli_runner.go:164] Run: docker run --rm --name addons-731605-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-731605 --entrypoint /usr/bin/test -v addons-731605:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -d /var/lib
	I0917 16:56:17.217017    8324 cli_runner.go:217] Completed: docker run --rm --name addons-731605-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-731605 --entrypoint /usr/bin/test -v addons-731605:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -d /var/lib: (2.116255038s)
	I0917 16:56:17.217043    8324 oci.go:107] Successfully prepared a docker volume addons-731605
	I0917 16:56:17.217076    8324 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 16:56:17.217095    8324 kic.go:194] Starting extracting preloaded images to volume ...
	I0917 16:56:17.217159    8324 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19662-2253/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-731605:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir
	I0917 16:56:20.983953    8324 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19662-2253/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-731605:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir: (3.76673548s)
	I0917 16:56:20.983986    8324 kic.go:203] duration metric: took 3.766887634s to extract preloaded images to volume ...
	W0917 16:56:20.984125    8324 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0917 16:56:20.984246    8324 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0917 16:56:21.039190    8324 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-731605 --name addons-731605 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-731605 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-731605 --network addons-731605 --ip 192.168.49.2 --volume addons-731605:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4
	I0917 16:56:21.411232    8324 cli_runner.go:164] Run: docker container inspect addons-731605 --format={{.State.Running}}
	I0917 16:56:21.435156    8324 cli_runner.go:164] Run: docker container inspect addons-731605 --format={{.State.Status}}
	I0917 16:56:21.457628    8324 cli_runner.go:164] Run: docker exec addons-731605 stat /var/lib/dpkg/alternatives/iptables
	I0917 16:56:21.517937    8324 oci.go:144] the created container "addons-731605" has a running status.
	I0917 16:56:21.517962    8324 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19662-2253/.minikube/machines/addons-731605/id_rsa...
	I0917 16:56:22.183649    8324 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19662-2253/.minikube/machines/addons-731605/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0917 16:56:22.210950    8324 cli_runner.go:164] Run: docker container inspect addons-731605 --format={{.State.Status}}
	I0917 16:56:22.231854    8324 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0917 16:56:22.231877    8324 kic_runner.go:114] Args: [docker exec --privileged addons-731605 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0917 16:56:22.311567    8324 cli_runner.go:164] Run: docker container inspect addons-731605 --format={{.State.Status}}
	I0917 16:56:22.337021    8324 machine.go:93] provisionDockerMachine start ...
	I0917 16:56:22.337170    8324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-731605
	I0917 16:56:22.360127    8324 main.go:141] libmachine: Using SSH client type: native
	I0917 16:56:22.360382    8324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0917 16:56:22.360398    8324 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 16:56:22.507172    8324 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-731605
	
	I0917 16:56:22.507254    8324 ubuntu.go:169] provisioning hostname "addons-731605"
	I0917 16:56:22.507350    8324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-731605
	I0917 16:56:22.528932    8324 main.go:141] libmachine: Using SSH client type: native
	I0917 16:56:22.529177    8324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0917 16:56:22.529189    8324 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-731605 && echo "addons-731605" | sudo tee /etc/hostname
	I0917 16:56:22.691965    8324 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-731605
	
	I0917 16:56:22.692110    8324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-731605
	I0917 16:56:22.709150    8324 main.go:141] libmachine: Using SSH client type: native
	I0917 16:56:22.709392    8324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0917 16:56:22.709413    8324 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-731605' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-731605/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-731605' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 16:56:22.855764    8324 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 16:56:22.855792    8324 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19662-2253/.minikube CaCertPath:/home/jenkins/minikube-integration/19662-2253/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19662-2253/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19662-2253/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19662-2253/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19662-2253/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19662-2253/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19662-2253/.minikube}
	I0917 16:56:22.855820    8324 ubuntu.go:177] setting up certificates
	I0917 16:56:22.855830    8324 provision.go:84] configureAuth start
	I0917 16:56:22.855897    8324 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-731605
	I0917 16:56:22.873731    8324 provision.go:143] copyHostCerts
	I0917 16:56:22.873830    8324 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-2253/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19662-2253/.minikube/ca.pem (1078 bytes)
	I0917 16:56:22.873960    8324 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-2253/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19662-2253/.minikube/cert.pem (1123 bytes)
	I0917 16:56:22.874035    8324 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-2253/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19662-2253/.minikube/key.pem (1679 bytes)
	I0917 16:56:22.874101    8324 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19662-2253/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19662-2253/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19662-2253/.minikube/certs/ca-key.pem org=jenkins.addons-731605 san=[127.0.0.1 192.168.49.2 addons-731605 localhost minikube]
	I0917 16:56:23.842473    8324 provision.go:177] copyRemoteCerts
	I0917 16:56:23.842547    8324 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 16:56:23.842586    8324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-731605
	I0917 16:56:23.863868    8324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19662-2253/.minikube/machines/addons-731605/id_rsa Username:docker}
	I0917 16:56:23.968396    8324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-2253/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 16:56:23.992842    8324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-2253/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 16:56:24.017129    8324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-2253/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 16:56:24.048636    8324 provision.go:87] duration metric: took 1.192786541s to configureAuth
	I0917 16:56:24.048707    8324 ubuntu.go:193] setting minikube options for container-runtime
	I0917 16:56:24.048930    8324 config.go:182] Loaded profile config "addons-731605": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 16:56:24.048994    8324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-731605
	I0917 16:56:24.071220    8324 main.go:141] libmachine: Using SSH client type: native
	I0917 16:56:24.071465    8324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0917 16:56:24.071483    8324 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 16:56:24.220246    8324 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0917 16:56:24.220267    8324 ubuntu.go:71] root file system type: overlay
	I0917 16:56:24.220399    8324 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 16:56:24.220474    8324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-731605
	I0917 16:56:24.239242    8324 main.go:141] libmachine: Using SSH client type: native
	I0917 16:56:24.239503    8324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0917 16:56:24.239595    8324 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 16:56:24.403638    8324 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 16:56:24.403831    8324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-731605
	I0917 16:56:24.421172    8324 main.go:141] libmachine: Using SSH client type: native
	I0917 16:56:24.421420    8324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0917 16:56:24.421444    8324 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 16:56:25.220657    8324 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-09-06 12:06:36.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-09-17 16:56:24.398097158 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
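The provisioning step above is idempotent: minikube renders the new unit to docker.service.new, and the `diff -u ... || { mv ...; systemctl ... }` command only swaps the file in and restarts Docker when the rendered unit actually differs. A minimal sketch of that pattern, using hypothetical temp files in place of /lib/systemd/system and `echo` in place of `systemctl`, so it runs unprivileged:

```shell
#!/bin/sh
# Sketch of the "diff || replace and restart" step from the log above.
# Temp files stand in for /lib/systemd/system/docker.service (hypothetical
# contents), and echo stands in for systemctl daemon-reload/restart.
set -eu
dir=$(mktemp -d)
printf 'ExecStart=/usr/bin/dockerd -H fd://\n' > "$dir/docker.service"
printf 'ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376\n' > "$dir/docker.service.new"

# diff -u exits 0 when the files already match, so the replace/restart
# branch after || only runs when the rendered unit actually changed.
diff -u "$dir/docker.service" "$dir/docker.service.new" >/dev/null || {
  mv "$dir/docker.service.new" "$dir/docker.service"
  echo "would run: systemctl daemon-reload && systemctl restart docker"
}
```

On a second run with identical contents, `diff` succeeds and the branch is skipped entirely, which is why the provisioner can apply this on every start.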
	I0917 16:56:25.220689    8324 machine.go:96] duration metric: took 2.883599615s to provisionDockerMachine
	I0917 16:56:25.220700    8324 client.go:171] duration metric: took 11.083056604s to LocalClient.Create
	I0917 16:56:25.220713    8324 start.go:167] duration metric: took 11.083120529s to libmachine.API.Create "addons-731605"
	I0917 16:56:25.220720    8324 start.go:293] postStartSetup for "addons-731605" (driver="docker")
	I0917 16:56:25.220731    8324 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 16:56:25.220804    8324 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 16:56:25.220844    8324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-731605
	I0917 16:56:25.237541    8324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19662-2253/.minikube/machines/addons-731605/id_rsa Username:docker}
	I0917 16:56:25.336887    8324 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 16:56:25.340081    8324 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 16:56:25.340119    8324 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 16:56:25.340149    8324 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 16:56:25.340161    8324 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 16:56:25.340173    8324 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-2253/.minikube/addons for local assets ...
	I0917 16:56:25.340249    8324 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-2253/.minikube/files for local assets ...
	I0917 16:56:25.340278    8324 start.go:296] duration metric: took 119.551153ms for postStartSetup
	I0917 16:56:25.340602    8324 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-731605
	I0917 16:56:25.356953    8324 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/config.json ...
	I0917 16:56:25.357250    8324 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 16:56:25.357299    8324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-731605
	I0917 16:56:25.373597    8324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19662-2253/.minikube/machines/addons-731605/id_rsa Username:docker}
	I0917 16:56:25.472150    8324 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 16:56:25.476661    8324 start.go:128] duration metric: took 11.341853801s to createHost
	I0917 16:56:25.476685    8324 start.go:83] releasing machines lock for "addons-731605", held for 11.342014791s
	I0917 16:56:25.476756    8324 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-731605
	I0917 16:56:25.492799    8324 ssh_runner.go:195] Run: cat /version.json
	I0917 16:56:25.492860    8324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-731605
	I0917 16:56:25.493117    8324 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 16:56:25.493197    8324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-731605
	I0917 16:56:25.512350    8324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19662-2253/.minikube/machines/addons-731605/id_rsa Username:docker}
	I0917 16:56:25.514436    8324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19662-2253/.minikube/machines/addons-731605/id_rsa Username:docker}
	I0917 16:56:25.607729    8324 ssh_runner.go:195] Run: systemctl --version
	I0917 16:56:25.734508    8324 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 16:56:25.739190    8324 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0917 16:56:25.769219    8324 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0917 16:56:25.769310    8324 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 16:56:25.801320    8324 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0917 16:56:25.801348    8324 start.go:495] detecting cgroup driver to use...
	I0917 16:56:25.801387    8324 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0917 16:56:25.801494    8324 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 16:56:25.818450    8324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0917 16:56:25.829039    8324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 16:56:25.839256    8324 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0917 16:56:25.839380    8324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0917 16:56:25.850292    8324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 16:56:25.860194    8324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 16:56:25.871125    8324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 16:56:25.882105    8324 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 16:56:25.892792    8324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 16:56:25.904790    8324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 16:56:25.916695    8324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 16:56:25.927314    8324 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 16:56:25.936517    8324 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 16:56:25.945309    8324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 16:56:26.031256    8324 ssh_runner.go:195] Run: sudo systemctl restart containerd
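The run of `sed -i` commands above rewrites /etc/containerd/config.toml in place to force the "cgroupfs" driver. A sketch of the SystemdCgroup substitution against a hypothetical config.toml fragment (modeled loosely on containerd's default layout), runnable without root:

```shell
#!/bin/sh
# Sketch of the SystemdCgroup edit from the sed commands above, applied to a
# hypothetical config.toml fragment in a temp file.
set -eu
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF

# Same substitution the log runs: whatever the current value, rewrite the
# line (preserving indentation via \1) so runc uses the cgroupfs driver.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
cat "$cfg"
```

Note the capture group: anchoring on leading spaces keeps the TOML indentation intact, so the edit is safe to re-run regardless of the value already present.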
	I0917 16:56:26.133657    8324 start.go:495] detecting cgroup driver to use...
	I0917 16:56:26.133725    8324 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0917 16:56:26.133792    8324 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 16:56:26.149696    8324 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0917 16:56:26.149769    8324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 16:56:26.164520    8324 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 16:56:26.185313    8324 ssh_runner.go:195] Run: which cri-dockerd
	I0917 16:56:26.192150    8324 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 16:56:26.202924    8324 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0917 16:56:26.221456    8324 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 16:56:26.325669    8324 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 16:56:26.421083    8324 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0917 16:56:26.421223    8324 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0917 16:56:26.443025    8324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 16:56:26.545707    8324 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 16:56:26.811676    8324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0917 16:56:26.825180    8324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 16:56:26.837988    8324 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0917 16:56:26.936983    8324 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0917 16:56:27.030049    8324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 16:56:27.124381    8324 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0917 16:56:27.139229    8324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 16:56:27.151339    8324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 16:56:27.242132    8324 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0917 16:56:27.311721    8324 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0917 16:56:27.311903    8324 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0917 16:56:27.315712    8324 start.go:563] Will wait 60s for crictl version
	I0917 16:56:27.315828    8324 ssh_runner.go:195] Run: which crictl
	I0917 16:56:27.319364    8324 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 16:56:27.361507    8324 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0917 16:56:27.361630    8324 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 16:56:27.388099    8324 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 16:56:27.413051    8324 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0917 16:56:27.413195    8324 cli_runner.go:164] Run: docker network inspect addons-731605 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 16:56:27.429040    8324 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0917 16:56:27.432667    8324 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
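The grep/cp one-liner above refreshes a single hosts entry without disturbing the rest of the file: filter out any stale line for the name, append the current mapping, then copy the result back over the original. A sketch against a hypothetical temp file instead of /etc/hosts (so no sudo is needed; 192.168.49.9 plays the role of a stale entry):

```shell
#!/bin/sh
# Sketch of minikube's hosts-entry refresh from the command above.
set -eu
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.49.9\thost.minikube.internal\n' > "$hosts"

ip=192.168.49.1
name=host.minikube.internal
# Keep every line that does not mention the name, then append exactly one
# fresh "IP<TAB>name" entry; the braces group both outputs into one stream.
{ grep -v "$name" "$hosts"; printf '%s\t%s\n' "$ip" "$name"; } > "$hosts.new"
cp "$hosts.new" "$hosts"
cat "$hosts"
```

Writing to a scratch file and copying back (rather than redirecting onto the file being read) avoids truncating the input mid-read, which is the same reason the logged command stages through /tmp/h.$$.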
	I0917 16:56:27.443400    8324 kubeadm.go:883] updating cluster {Name:addons-731605 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-731605 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 16:56:27.443531    8324 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 16:56:27.443597    8324 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 16:56:27.461547    8324 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0917 16:56:27.461568    8324 docker.go:615] Images already preloaded, skipping extraction
	I0917 16:56:27.461654    8324 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 16:56:27.479877    8324 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0917 16:56:27.479905    8324 cache_images.go:84] Images are preloaded, skipping loading
	I0917 16:56:27.479915    8324 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 docker true true} ...
	I0917 16:56:27.480070    8324 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-731605 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-731605 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 16:56:27.480164    8324 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0917 16:56:27.527379    8324 cni.go:84] Creating CNI manager for ""
	I0917 16:56:27.527406    8324 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 16:56:27.527417    8324 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 16:56:27.527436    8324 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-731605 NodeName:addons-731605 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 16:56:27.527591    8324 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-731605"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
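The generated kubeadm config above is a single file carrying four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by `---`. A quick structural sanity check of that layout, sketched against a hypothetical skeleton with the same kinds:

```shell
#!/bin/sh
# Sketch: count the YAML documents and list the kind: of each, mirroring the
# four-document kubeadm config minikube writes above (skeleton contents only).
set -eu
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF

# Three separators means four documents; kinds lists them in file order.
seps=$(grep -c '^---$' "$cfg")
kinds=$(grep '^kind:' "$cfg" | awk '{print $2}')
echo "documents: $((seps + 1))"
echo "$kinds"
```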
	
	I0917 16:56:27.527656    8324 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 16:56:27.536671    8324 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 16:56:27.536762    8324 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 16:56:27.545355    8324 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0917 16:56:27.563173    8324 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 16:56:27.581197    8324 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0917 16:56:27.599295    8324 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0917 16:56:27.602762    8324 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 16:56:27.617026    8324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 16:56:27.724012    8324 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 16:56:27.739858    8324 certs.go:68] Setting up /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605 for IP: 192.168.49.2
	I0917 16:56:27.739894    8324 certs.go:194] generating shared ca certs ...
	I0917 16:56:27.739911    8324 certs.go:226] acquiring lock for ca certs: {Name:mk4233cd6d22b902eb1a88fa3630e0f93cf4a1c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:27.740052    8324 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19662-2253/.minikube/ca.key
	I0917 16:56:28.064797    8324 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19662-2253/.minikube/ca.crt ...
	I0917 16:56:28.064835    8324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-2253/.minikube/ca.crt: {Name:mkbd95fc9c74a7f92bfad573aaef04d265ffc139 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:28.065046    8324 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19662-2253/.minikube/ca.key ...
	I0917 16:56:28.065062    8324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-2253/.minikube/ca.key: {Name:mkc14a3e90bb0aeeb1c8d549d47c375b4aa84049 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:28.065144    8324 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19662-2253/.minikube/proxy-client-ca.key
	I0917 16:56:28.458745    8324 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19662-2253/.minikube/proxy-client-ca.crt ...
	I0917 16:56:28.458776    8324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-2253/.minikube/proxy-client-ca.crt: {Name:mkf3c2cc5824ee644132fc4e707eed238ff55f9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:28.458964    8324 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19662-2253/.minikube/proxy-client-ca.key ...
	I0917 16:56:28.458977    8324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-2253/.minikube/proxy-client-ca.key: {Name:mke83ecaf52839c5ac8737034844966e5e358406 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:28.459061    8324 certs.go:256] generating profile certs ...
	I0917 16:56:28.459127    8324 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/client.key
	I0917 16:56:28.459143    8324 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/client.crt with IP's: []
	I0917 16:56:28.968623    8324 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/client.crt ...
	I0917 16:56:28.968654    8324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/client.crt: {Name:mk46c9961b7a69dcf4244920ae9b53f74531c8f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:28.968845    8324 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/client.key ...
	I0917 16:56:28.968858    8324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/client.key: {Name:mkabebac27c97b452bd7d9ef33854e678b183637 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:28.968935    8324 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/apiserver.key.868db4b3
	I0917 16:56:28.968957    8324 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/apiserver.crt.868db4b3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0917 16:56:29.654182    8324 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/apiserver.crt.868db4b3 ...
	I0917 16:56:29.654217    8324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/apiserver.crt.868db4b3: {Name:mkfaaf4f5785d6a022ab25176501f2976c697923 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:29.654404    8324 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/apiserver.key.868db4b3 ...
	I0917 16:56:29.654419    8324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/apiserver.key.868db4b3: {Name:mkb5190f64ff492ef601549075a5272edd628524 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:29.654504    8324 certs.go:381] copying /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/apiserver.crt.868db4b3 -> /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/apiserver.crt
	I0917 16:56:29.654584    8324 certs.go:385] copying /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/apiserver.key.868db4b3 -> /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/apiserver.key
	I0917 16:56:29.654642    8324 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/proxy-client.key
	I0917 16:56:29.654662    8324 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/proxy-client.crt with IP's: []
	I0917 16:56:30.128281    8324 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/proxy-client.crt ...
	I0917 16:56:30.128319    8324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/proxy-client.crt: {Name:mkfa931092eb47812407266ea7eeb67a77f37b9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:30.128510    8324 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/proxy-client.key ...
	I0917 16:56:30.128520    8324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/proxy-client.key: {Name:mkf41edf2a4e0e824181ad6770270ae692211c1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:30.128704    8324 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-2253/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 16:56:30.128740    8324 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-2253/.minikube/certs/ca.pem (1078 bytes)
	I0917 16:56:30.128772    8324 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-2253/.minikube/certs/cert.pem (1123 bytes)
	I0917 16:56:30.128832    8324 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-2253/.minikube/certs/key.pem (1679 bytes)
	I0917 16:56:30.129535    8324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-2253/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 16:56:30.165849    8324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-2253/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 16:56:30.202717    8324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-2253/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 16:56:30.231662    8324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-2253/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0917 16:56:30.258967    8324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0917 16:56:30.285873    8324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 16:56:30.311959    8324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 16:56:30.341571    8324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0917 16:56:30.367547    8324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-2253/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 16:56:30.392209    8324 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 16:56:30.411818    8324 ssh_runner.go:195] Run: openssl version
	I0917 16:56:30.417421    8324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 16:56:30.427791    8324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 16:56:30.431305    8324 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0917 16:56:30.431372    8324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 16:56:30.438568    8324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 16:56:30.447706    8324 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 16:56:30.450942    8324 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0917 16:56:30.450991    8324 kubeadm.go:392] StartCluster: {Name:addons-731605 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-731605 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 16:56:30.451131    8324 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0917 16:56:30.468436    8324 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 16:56:30.477437    8324 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 16:56:30.486716    8324 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0917 16:56:30.486797    8324 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 16:56:30.495714    8324 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 16:56:30.495734    8324 kubeadm.go:157] found existing configuration files:
	
	I0917 16:56:30.495789    8324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 16:56:30.505108    8324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 16:56:30.505173    8324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 16:56:30.513675    8324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 16:56:30.522614    8324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 16:56:30.522696    8324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 16:56:30.531834    8324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 16:56:30.540930    8324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 16:56:30.541028    8324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 16:56:30.549645    8324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 16:56:30.558681    8324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 16:56:30.558770    8324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 16:56:30.567442    8324 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0917 16:56:30.605214    8324 kubeadm.go:310] W0917 16:56:30.604506    1815 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 16:56:30.606423    8324 kubeadm.go:310] W0917 16:56:30.605815    1815 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 16:56:30.631753    8324 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
	I0917 16:56:30.691902    8324 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0917 16:56:46.616572    8324 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0917 16:56:46.616632    8324 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 16:56:46.616727    8324 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0917 16:56:46.616824    8324 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I0917 16:56:46.616874    8324 kubeadm.go:310] OS: Linux
	I0917 16:56:46.616930    8324 kubeadm.go:310] CGROUPS_CPU: enabled
	I0917 16:56:46.616985    8324 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0917 16:56:46.617033    8324 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0917 16:56:46.617080    8324 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0917 16:56:46.617129    8324 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0917 16:56:46.617179    8324 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0917 16:56:46.617224    8324 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0917 16:56:46.617285    8324 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0917 16:56:46.617332    8324 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0917 16:56:46.617409    8324 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 16:56:46.617523    8324 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 16:56:46.617613    8324 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0917 16:56:46.617716    8324 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 16:56:46.619748    8324 out.go:235]   - Generating certificates and keys ...
	I0917 16:56:46.619851    8324 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 16:56:46.619950    8324 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 16:56:46.620054    8324 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0917 16:56:46.620136    8324 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0917 16:56:46.620212    8324 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0917 16:56:46.620271    8324 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0917 16:56:46.620331    8324 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0917 16:56:46.620451    8324 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-731605 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0917 16:56:46.620507    8324 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0917 16:56:46.620640    8324 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-731605 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0917 16:56:46.620710    8324 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0917 16:56:46.620778    8324 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0917 16:56:46.620826    8324 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0917 16:56:46.620884    8324 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 16:56:46.620939    8324 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 16:56:46.620998    8324 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0917 16:56:46.621060    8324 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 16:56:46.621127    8324 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 16:56:46.621185    8324 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 16:56:46.621268    8324 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 16:56:46.621337    8324 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 16:56:46.623364    8324 out.go:235]   - Booting up control plane ...
	I0917 16:56:46.623521    8324 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 16:56:46.623613    8324 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 16:56:46.623754    8324 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 16:56:46.623893    8324 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 16:56:46.624002    8324 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 16:56:46.624073    8324 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 16:56:46.624245    8324 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0917 16:56:46.624373    8324 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0917 16:56:46.624446    8324 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.000730106s
	I0917 16:56:46.624538    8324 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0917 16:56:46.624631    8324 kubeadm.go:310] [api-check] The API server is healthy after 6.014822155s
	I0917 16:56:46.624786    8324 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0917 16:56:46.624942    8324 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0917 16:56:46.625005    8324 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0917 16:56:46.625194    8324 kubeadm.go:310] [mark-control-plane] Marking the node addons-731605 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0917 16:56:46.625253    8324 kubeadm.go:310] [bootstrap-token] Using token: mh7oco.y3y87sddnrom4oau
	I0917 16:56:46.627235    8324 out.go:235]   - Configuring RBAC rules ...
	I0917 16:56:46.627423    8324 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0917 16:56:46.627552    8324 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0917 16:56:46.627764    8324 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0917 16:56:46.627953    8324 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0917 16:56:46.628112    8324 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0917 16:56:46.628218    8324 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0917 16:56:46.628345    8324 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0917 16:56:46.628396    8324 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0917 16:56:46.628452    8324 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0917 16:56:46.628464    8324 kubeadm.go:310] 
	I0917 16:56:46.628528    8324 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0917 16:56:46.628539    8324 kubeadm.go:310] 
	I0917 16:56:46.628621    8324 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0917 16:56:46.628632    8324 kubeadm.go:310] 
	I0917 16:56:46.628659    8324 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0917 16:56:46.628729    8324 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0917 16:56:46.628790    8324 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0917 16:56:46.628800    8324 kubeadm.go:310] 
	I0917 16:56:46.628857    8324 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0917 16:56:46.628867    8324 kubeadm.go:310] 
	I0917 16:56:46.628918    8324 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0917 16:56:46.628929    8324 kubeadm.go:310] 
	I0917 16:56:46.628985    8324 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0917 16:56:46.629070    8324 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0917 16:56:46.629145    8324 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0917 16:56:46.629152    8324 kubeadm.go:310] 
	I0917 16:56:46.629241    8324 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0917 16:56:46.629325    8324 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0917 16:56:46.629333    8324 kubeadm.go:310] 
	I0917 16:56:46.629422    8324 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token mh7oco.y3y87sddnrom4oau \
	I0917 16:56:46.629534    8324 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:cc495b07a00a58e640dc79cf1f74d56bfb00f4839c2d5eb8e4adc88dc1953060 \
	I0917 16:56:46.629559    8324 kubeadm.go:310] 	--control-plane 
	I0917 16:56:46.629567    8324 kubeadm.go:310] 
	I0917 16:56:46.629657    8324 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0917 16:56:46.629669    8324 kubeadm.go:310] 
	I0917 16:56:46.629756    8324 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token mh7oco.y3y87sddnrom4oau \
	I0917 16:56:46.629880    8324 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:cc495b07a00a58e640dc79cf1f74d56bfb00f4839c2d5eb8e4adc88dc1953060 
	I0917 16:56:46.629896    8324 cni.go:84] Creating CNI manager for ""
	I0917 16:56:46.629953    8324 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 16:56:46.631966    8324 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0917 16:56:46.634041    8324 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0917 16:56:46.642808    8324 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0917 16:56:46.661548    8324 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0917 16:56:46.661638    8324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 16:56:46.661731    8324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-731605 minikube.k8s.io/updated_at=2024_09_17T16_56_46_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce minikube.k8s.io/name=addons-731605 minikube.k8s.io/primary=true
	I0917 16:56:46.798498    8324 ops.go:34] apiserver oom_adj: -16
	I0917 16:56:46.798632    8324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 16:56:47.298736    8324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 16:56:47.799332    8324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 16:56:48.299527    8324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 16:56:48.799176    8324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 16:56:49.299672    8324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 16:56:49.799482    8324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 16:56:50.298728    8324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 16:56:50.799484    8324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 16:56:50.904836    8324 kubeadm.go:1113] duration metric: took 4.24326739s to wait for elevateKubeSystemPrivileges
	I0917 16:56:50.904872    8324 kubeadm.go:394] duration metric: took 20.453884373s to StartCluster
	I0917 16:56:50.904889    8324 settings.go:142] acquiring lock: {Name:mkdb6771861a9971ad02b34bc008b515d936ba60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:50.905019    8324 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19662-2253/kubeconfig
	I0917 16:56:50.905478    8324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-2253/kubeconfig: {Name:mk7c603d8d76f3ca0de80c5b79069197b0c670fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:50.905678    8324 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0917 16:56:50.905699    8324 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 16:56:50.906093    8324 config.go:182] Loaded profile config "addons-731605": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 16:56:50.906149    8324 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0917 16:56:50.906278    8324 addons.go:69] Setting yakd=true in profile "addons-731605"
	I0917 16:56:50.906301    8324 addons.go:234] Setting addon yakd=true in "addons-731605"
	I0917 16:56:50.906327    8324 addons.go:69] Setting inspektor-gadget=true in profile "addons-731605"
	I0917 16:56:50.906354    8324 addons.go:69] Setting metrics-server=true in profile "addons-731605"
	I0917 16:56:50.906372    8324 addons.go:234] Setting addon metrics-server=true in "addons-731605"
	I0917 16:56:50.906391    8324 addons.go:69] Setting cloud-spanner=true in profile "addons-731605"
	I0917 16:56:50.906420    8324 addons.go:234] Setting addon cloud-spanner=true in "addons-731605"
	I0917 16:56:50.906447    8324 addons.go:69] Setting storage-provisioner=true in profile "addons-731605"
	I0917 16:56:50.906484    8324 host.go:66] Checking if "addons-731605" exists ...
	I0917 16:56:50.906508    8324 addons.go:234] Setting addon storage-provisioner=true in "addons-731605"
	I0917 16:56:50.906543    8324 host.go:66] Checking if "addons-731605" exists ...
	I0917 16:56:50.907048    8324 cli_runner.go:164] Run: docker container inspect addons-731605 --format={{.State.Status}}
	I0917 16:56:50.907080    8324 cli_runner.go:164] Run: docker container inspect addons-731605 --format={{.State.Status}}
	I0917 16:56:50.906414    8324 addons.go:69] Setting registry=true in profile "addons-731605"
	I0917 16:56:50.907535    8324 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-731605"
	I0917 16:56:50.907542    8324 addons.go:234] Setting addon registry=true in "addons-731605"
	I0917 16:56:50.907552    8324 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-731605"
	I0917 16:56:50.907570    8324 host.go:66] Checking if "addons-731605" exists ...
	I0917 16:56:50.907641    8324 addons.go:69] Setting volcano=true in profile "addons-731605"
	I0917 16:56:50.907651    8324 addons.go:234] Setting addon volcano=true in "addons-731605"
	I0917 16:56:50.907715    8324 host.go:66] Checking if "addons-731605" exists ...
	I0917 16:56:50.908076    8324 cli_runner.go:164] Run: docker container inspect addons-731605 --format={{.State.Status}}
	I0917 16:56:50.908177    8324 cli_runner.go:164] Run: docker container inspect addons-731605 --format={{.State.Status}}
	I0917 16:56:50.910941    8324 cli_runner.go:164] Run: docker container inspect addons-731605 --format={{.State.Status}}
	I0917 16:56:50.912999    8324 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-731605"
	I0917 16:56:50.913110    8324 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-731605"
	I0917 16:56:50.913161    8324 host.go:66] Checking if "addons-731605" exists ...
	I0917 16:56:50.913769    8324 cli_runner.go:164] Run: docker container inspect addons-731605 --format={{.State.Status}}
	I0917 16:56:50.918010    8324 addons.go:69] Setting default-storageclass=true in profile "addons-731605"
	I0917 16:56:50.918052    8324 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-731605"
	I0917 16:56:50.918387    8324 cli_runner.go:164] Run: docker container inspect addons-731605 --format={{.State.Status}}
	I0917 16:56:50.921298    8324 addons.go:69] Setting volumesnapshots=true in profile "addons-731605"
	I0917 16:56:50.921386    8324 addons.go:234] Setting addon volumesnapshots=true in "addons-731605"
	I0917 16:56:50.921462    8324 host.go:66] Checking if "addons-731605" exists ...
	I0917 16:56:50.922046    8324 cli_runner.go:164] Run: docker container inspect addons-731605 --format={{.State.Status}}
	I0917 16:56:50.945721    8324 addons.go:69] Setting gcp-auth=true in profile "addons-731605"
	I0917 16:56:50.945814    8324 mustload.go:65] Loading cluster: addons-731605
	I0917 16:56:50.946056    8324 config.go:182] Loaded profile config "addons-731605": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 16:56:50.946401    8324 cli_runner.go:164] Run: docker container inspect addons-731605 --format={{.State.Status}}
	I0917 16:56:50.971592    8324 addons.go:69] Setting ingress=true in profile "addons-731605"
	I0917 16:56:50.971668    8324 addons.go:234] Setting addon ingress=true in "addons-731605"
	I0917 16:56:50.971754    8324 host.go:66] Checking if "addons-731605" exists ...
	I0917 16:56:50.972286    8324 cli_runner.go:164] Run: docker container inspect addons-731605 --format={{.State.Status}}
	I0917 16:56:50.972618    8324 out.go:177] * Verifying Kubernetes components...
	I0917 16:56:50.974779    8324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 16:56:51.003216    8324 addons.go:69] Setting ingress-dns=true in profile "addons-731605"
	I0917 16:56:51.003295    8324 addons.go:234] Setting addon ingress-dns=true in "addons-731605"
	I0917 16:56:51.003365    8324 host.go:66] Checking if "addons-731605" exists ...
	I0917 16:56:51.004003    8324 cli_runner.go:164] Run: docker container inspect addons-731605 --format={{.State.Status}}
	I0917 16:56:50.906356    8324 addons.go:234] Setting addon inspektor-gadget=true in "addons-731605"
	I0917 16:56:51.044289    8324 host.go:66] Checking if "addons-731605" exists ...
	I0917 16:56:51.044867    8324 cli_runner.go:164] Run: docker container inspect addons-731605 --format={{.State.Status}}
	I0917 16:56:51.048026    8324 out.go:177]   - Using image docker.io/registry:2.8.3
	I0917 16:56:51.053808    8324 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0917 16:56:51.058819    8324 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0917 16:56:51.058885    8324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0917 16:56:51.058986    8324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-731605
	I0917 16:56:50.906331    8324 host.go:66] Checking if "addons-731605" exists ...
	I0917 16:56:51.061808    8324 cli_runner.go:164] Run: docker container inspect addons-731605 --format={{.State.Status}}
	I0917 16:56:51.075588    8324 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 16:56:50.906407    8324 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-731605"
	I0917 16:56:51.085516    8324 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-731605"
	I0917 16:56:51.085595    8324 host.go:66] Checking if "addons-731605" exists ...
	I0917 16:56:51.086120    8324 cli_runner.go:164] Run: docker container inspect addons-731605 --format={{.State.Status}}
	I0917 16:56:51.089417    8324 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 16:56:51.089518    8324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0917 16:56:51.089622    8324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-731605
	I0917 16:56:50.906399    8324 host.go:66] Checking if "addons-731605" exists ...
	I0917 16:56:51.103641    8324 cli_runner.go:164] Run: docker container inspect addons-731605 --format={{.State.Status}}
	I0917 16:56:51.106305    8324 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0917 16:56:51.108417    8324 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0917 16:56:51.108488    8324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0917 16:56:51.108594    8324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-731605
	I0917 16:56:51.138837    8324 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-731605"
	I0917 16:56:51.138943    8324 host.go:66] Checking if "addons-731605" exists ...
	I0917 16:56:51.139466    8324 cli_runner.go:164] Run: docker container inspect addons-731605 --format={{.State.Status}}
	I0917 16:56:51.167772    8324 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0917 16:56:51.176333    8324 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0917 16:56:51.180193    8324 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0917 16:56:51.211198    8324 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0917 16:56:51.215933    8324 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0917 16:56:51.215958    8324 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0917 16:56:51.216029    8324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-731605
	I0917 16:56:51.222934    8324 addons.go:234] Setting addon default-storageclass=true in "addons-731605"
	I0917 16:56:51.222975    8324 host.go:66] Checking if "addons-731605" exists ...
	I0917 16:56:51.223386    8324 cli_runner.go:164] Run: docker container inspect addons-731605 --format={{.State.Status}}
	I0917 16:56:51.242475    8324 host.go:66] Checking if "addons-731605" exists ...
	I0917 16:56:51.246714    8324 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0917 16:56:51.250233    8324 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0917 16:56:51.252273    8324 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0917 16:56:51.254974    8324 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0917 16:56:51.254998    8324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0917 16:56:51.255065    8324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-731605
	I0917 16:56:51.258109    8324 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0917 16:56:51.261209    8324 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0917 16:56:51.266033    8324 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0917 16:56:51.268897    8324 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0917 16:56:51.270734    8324 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0917 16:56:51.272973    8324 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0917 16:56:51.273992    8324 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0917 16:56:51.327986    8324 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0917 16:56:51.328006    8324 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0917 16:56:51.328079    8324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-731605
	I0917 16:56:51.328341    8324 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0917 16:56:51.328692    8324 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0917 16:56:51.331680    8324 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0917 16:56:51.331724    8324 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0917 16:56:51.331793    8324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-731605
	I0917 16:56:51.348747    8324 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0917 16:56:51.353583    8324 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0917 16:56:51.353612    8324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0917 16:56:51.353683    8324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-731605
	I0917 16:56:51.388185    8324 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0917 16:56:51.388205    8324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0917 16:56:51.388269    8324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-731605
	I0917 16:56:51.392955    8324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19662-2253/.minikube/machines/addons-731605/id_rsa Username:docker}
	I0917 16:56:51.395528    8324 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0917 16:56:51.395646    8324 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0917 16:56:51.397682    8324 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0917 16:56:51.397705    8324 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0917 16:56:51.397771    8324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-731605
	I0917 16:56:51.410024    8324 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0917 16:56:51.410052    8324 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0917 16:56:51.410137    8324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-731605
	I0917 16:56:51.431031    8324 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0917 16:56:51.431280    8324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19662-2253/.minikube/machines/addons-731605/id_rsa Username:docker}
	I0917 16:56:51.435062    8324 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0917 16:56:51.435084    8324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0917 16:56:51.435164    8324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-731605
	I0917 16:56:51.447301    8324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19662-2253/.minikube/machines/addons-731605/id_rsa Username:docker}
	I0917 16:56:51.465076    8324 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0917 16:56:51.467630    8324 out.go:177]   - Using image docker.io/busybox:stable
	I0917 16:56:51.470966    8324 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0917 16:56:51.470990    8324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0917 16:56:51.471059    8324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-731605
	I0917 16:56:51.477702    8324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19662-2253/.minikube/machines/addons-731605/id_rsa Username:docker}
	I0917 16:56:51.542726    8324 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0917 16:56:51.542748    8324 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0917 16:56:51.542809    8324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-731605
	I0917 16:56:51.561051    8324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19662-2253/.minikube/machines/addons-731605/id_rsa Username:docker}
	I0917 16:56:51.605182    8324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19662-2253/.minikube/machines/addons-731605/id_rsa Username:docker}
	I0917 16:56:51.613199    8324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19662-2253/.minikube/machines/addons-731605/id_rsa Username:docker}
	I0917 16:56:51.629904    8324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19662-2253/.minikube/machines/addons-731605/id_rsa Username:docker}
	I0917 16:56:51.634383    8324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19662-2253/.minikube/machines/addons-731605/id_rsa Username:docker}
	I0917 16:56:51.635066    8324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19662-2253/.minikube/machines/addons-731605/id_rsa Username:docker}
	I0917 16:56:51.654236    8324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19662-2253/.minikube/machines/addons-731605/id_rsa Username:docker}
	I0917 16:56:51.660266    8324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19662-2253/.minikube/machines/addons-731605/id_rsa Username:docker}
	W0917 16:56:51.670675    8324 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0917 16:56:51.670703    8324 retry.go:31] will retry after 267.773389ms: ssh: handshake failed: EOF
	I0917 16:56:51.681036    8324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19662-2253/.minikube/machines/addons-731605/id_rsa Username:docker}
	I0917 16:56:51.683405    8324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19662-2253/.minikube/machines/addons-731605/id_rsa Username:docker}
	I0917 16:56:52.079113    8324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0917 16:56:52.240902    8324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 16:56:52.258782    8324 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0917 16:56:52.258855    8324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0917 16:56:52.268085    8324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0917 16:56:52.357781    8324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0917 16:56:52.384265    8324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0917 16:56:52.545881    8324 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0917 16:56:52.545952    8324 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0917 16:56:52.617430    8324 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0917 16:56:52.617458    8324 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0917 16:56:52.644641    8324 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0917 16:56:52.644683    8324 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0917 16:56:52.668617    8324 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.693773142s)
	I0917 16:56:52.668799    8324 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 16:56:52.668714    8324 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.763004076s)
	I0917 16:56:52.669055    8324 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0917 16:56:52.763046    8324 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0917 16:56:52.763123    8324 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0917 16:56:52.836567    8324 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0917 16:56:52.836646    8324 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0917 16:56:52.840236    8324 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0917 16:56:52.840298    8324 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0917 16:56:52.916848    8324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0917 16:56:52.937387    8324 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0917 16:56:52.937463    8324 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0917 16:56:52.982524    8324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0917 16:56:53.001928    8324 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0917 16:56:53.001990    8324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0917 16:56:53.006755    8324 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0917 16:56:53.006833    8324 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0917 16:56:53.010099    8324 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0917 16:56:53.010170    8324 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0917 16:56:53.054800    8324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0917 16:56:53.090943    8324 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 16:56:53.091025    8324 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0917 16:56:53.110065    8324 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0917 16:56:53.110141    8324 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0917 16:56:53.112705    8324 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0917 16:56:53.112784    8324 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0917 16:56:53.209040    8324 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0917 16:56:53.209119    8324 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0917 16:56:53.256034    8324 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0917 16:56:53.256057    8324 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0917 16:56:53.292404    8324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0917 16:56:53.307753    8324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 16:56:53.390058    8324 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0917 16:56:53.390085    8324 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0917 16:56:53.405166    8324 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0917 16:56:53.405192    8324 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0917 16:56:53.413149    8324 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0917 16:56:53.413172    8324 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0917 16:56:53.531658    8324 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0917 16:56:53.531698    8324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0917 16:56:53.688577    8324 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0917 16:56:53.688604    8324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0917 16:56:53.766753    8324 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0917 16:56:53.766781    8324 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0917 16:56:53.784071    8324 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0917 16:56:53.784098    8324 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0917 16:56:53.887115    8324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0917 16:56:53.950740    8324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0917 16:56:54.024412    8324 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0917 16:56:54.024441    8324 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0917 16:56:54.107322    8324 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0917 16:56:54.107349    8324 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0917 16:56:54.198616    8324 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0917 16:56:54.198644    8324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0917 16:56:54.613917    8324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0917 16:56:54.669441    8324 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0917 16:56:54.669467    8324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0917 16:56:55.459932    8324 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0917 16:56:55.459958    8324 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0917 16:56:55.699778    8324 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0917 16:56:55.699804    8324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0917 16:56:56.105133    8324 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0917 16:56:56.105159    8324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0917 16:56:56.615228    8324 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0917 16:56:56.615259    8324 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0917 16:56:56.839051    8324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0917 16:56:58.255235    8324 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0917 16:56:58.255328    8324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-731605
	I0917 16:56:58.297703    8324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19662-2253/.minikube/machines/addons-731605/id_rsa Username:docker}
	I0917 16:56:59.314136    8324 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0917 16:56:59.336468    8324 addons.go:234] Setting addon gcp-auth=true in "addons-731605"
	I0917 16:56:59.336519    8324 host.go:66] Checking if "addons-731605" exists ...
	I0917 16:56:59.336978    8324 cli_runner.go:164] Run: docker container inspect addons-731605 --format={{.State.Status}}
	I0917 16:56:59.361637    8324 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0917 16:56:59.361690    8324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-731605
	I0917 16:56:59.386667    8324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19662-2253/.minikube/machines/addons-731605/id_rsa Username:docker}
	I0917 16:57:01.893348    8324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.814197517s)
	I0917 16:57:01.893387    8324 addons.go:475] Verifying addon ingress=true in "addons-731605"
	I0917 16:57:01.893647    8324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.652644363s)
	I0917 16:57:01.893756    8324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (9.625601437s)
	I0917 16:57:01.895591    8324 out.go:177] * Verifying ingress addon...
	I0917 16:57:01.899069    8324 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0917 16:57:01.905859    8324 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0917 16:57:01.905890    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:02.426649    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:02.911118    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:03.462457    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:03.492416    8324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (11.134602719s)
	I0917 16:57:03.492472    8324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (11.1081721s)
	I0917 16:57:03.492512    8324 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (10.823437379s)
	I0917 16:57:03.492578    8324 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0917 16:57:03.492598    8324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (10.510006485s)
	I0917 16:57:03.492819    8324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (10.437945116s)
	I0917 16:57:03.492867    8324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (10.20043852s)
	I0917 16:57:03.492879    8324 addons.go:475] Verifying addon registry=true in "addons-731605"
	I0917 16:57:03.493420    8324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.185630011s)
	I0917 16:57:03.493443    8324 addons.go:475] Verifying addon metrics-server=true in "addons-731605"
	I0917 16:57:03.493491    8324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (9.606348935s)
	I0917 16:57:03.493814    8324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (9.543039719s)
	W0917 16:57:03.493854    8324 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0917 16:57:03.493876    8324 retry.go:31] will retry after 363.451603ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0917 16:57:03.493992    8324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (8.88004417s)
	I0917 16:57:03.492524    8324 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (10.823709777s)
	I0917 16:57:03.492559    8324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.575645446s)
	I0917 16:57:03.494961    8324 node_ready.go:35] waiting up to 6m0s for node "addons-731605" to be "Ready" ...
	I0917 16:57:03.496341    8324 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-731605 service yakd-dashboard -n yakd-dashboard
	
	I0917 16:57:03.496344    8324 out.go:177] * Verifying registry addon...
	I0917 16:57:03.499415    8324 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0917 16:57:03.527640    8324 node_ready.go:49] node "addons-731605" has status "Ready":"True"
	I0917 16:57:03.527665    8324 node_ready.go:38] duration metric: took 32.640905ms for node "addons-731605" to be "Ready" ...
	I0917 16:57:03.527678    8324 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	W0917 16:57:03.594562    8324 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0917 16:57:03.610893    8324 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0917 16:57:03.610965    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:03.631757    8324 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nfdb2" in "kube-system" namespace to be "Ready" ...
	I0917 16:57:03.857537    8324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0917 16:57:03.991542    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:04.011840    8324 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-731605" context rescaled to 1 replicas
	I0917 16:57:04.064205    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:04.406074    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:04.508819    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:04.819573    8324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.980428015s)
	I0917 16:57:04.819608    8324 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-731605"
	I0917 16:57:04.819834    8324 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.45817447s)
	I0917 16:57:04.823245    8324 out.go:177] * Verifying csi-hostpath-driver addon...
	I0917 16:57:04.823378    8324 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0917 16:57:04.826093    8324 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0917 16:57:04.828510    8324 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0917 16:57:04.830490    8324 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0917 16:57:04.830553    8324 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0917 16:57:04.848154    8324 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0917 16:57:04.848233    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:04.903828    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:04.962663    8324 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0917 16:57:04.962749    8324 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0917 16:57:05.004171    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:05.051053    8324 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0917 16:57:05.051097    8324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0917 16:57:05.087604    8324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0917 16:57:05.332581    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:05.406836    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:05.504040    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:05.639121    8324 pod_ready.go:103] pod "coredns-7c65d6cfc9-nfdb2" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:05.831625    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:05.904453    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:06.003149    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:06.333043    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:06.403439    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:06.435668    8324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.578034313s)
	I0917 16:57:06.503833    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:06.678189    8324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.59054311s)
	I0917 16:57:06.681223    8324 addons.go:475] Verifying addon gcp-auth=true in "addons-731605"
	I0917 16:57:06.683431    8324 out.go:177] * Verifying gcp-auth addon...
	I0917 16:57:06.686643    8324 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0917 16:57:06.692684    8324 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0917 16:57:06.835655    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:06.903854    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:07.003946    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:07.330833    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:07.403484    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:07.503009    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:07.832267    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:07.932124    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:08.003559    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:08.138766    8324 pod_ready.go:103] pod "coredns-7c65d6cfc9-nfdb2" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:08.331273    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:08.404265    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:08.503642    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:08.831878    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:08.904908    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:09.004550    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:09.331048    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:09.405193    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:09.503349    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:09.831748    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:09.932748    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:10.003925    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:10.139053    8324 pod_ready.go:103] pod "coredns-7c65d6cfc9-nfdb2" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:10.330602    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:10.404645    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:10.503372    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:10.831516    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:10.913229    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:11.005244    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:11.330596    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:11.403833    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:11.503600    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:11.831622    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:11.904223    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:12.003946    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:12.142964    8324 pod_ready.go:103] pod "coredns-7c65d6cfc9-nfdb2" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:12.331061    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:12.404309    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:12.503409    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:12.831172    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:12.905658    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:13.003706    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:13.330499    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:13.404285    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:13.503452    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:13.831879    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:13.904437    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:14.003160    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:14.331158    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:14.404062    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:14.503269    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:14.639587    8324 pod_ready.go:103] pod "coredns-7c65d6cfc9-nfdb2" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:14.831801    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:14.903570    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:15.004035    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:15.331295    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:15.404603    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:15.503408    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:15.831319    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:15.903935    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:16.003138    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:16.331341    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:16.405170    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:16.503924    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:16.830811    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:16.905135    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:17.004130    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:17.139152    8324 pod_ready.go:103] pod "coredns-7c65d6cfc9-nfdb2" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:17.330300    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:17.404453    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:17.504774    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:17.831280    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:17.904267    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:18.003944    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:18.331054    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:18.404269    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:18.504113    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:18.831380    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:18.903741    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:19.004167    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:19.330926    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:19.404566    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:19.503329    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:19.638850    8324 pod_ready.go:103] pod "coredns-7c65d6cfc9-nfdb2" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:19.832548    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:19.904502    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:20.013084    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:20.332621    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:20.405051    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:20.504818    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:20.830676    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:20.904151    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:21.004080    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:21.331457    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:21.403922    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:21.504322    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:21.831346    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:21.904149    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:22.003581    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:22.138687    8324 pod_ready.go:103] pod "coredns-7c65d6cfc9-nfdb2" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:22.332223    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:22.405834    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:22.503795    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:22.832336    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:22.909866    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:23.004676    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:23.330798    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:23.404038    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:23.503486    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:23.830792    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:23.903540    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:24.003480    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:24.331773    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:24.403438    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:24.503555    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:24.639174    8324 pod_ready.go:103] pod "coredns-7c65d6cfc9-nfdb2" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:24.831875    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:24.915377    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:25.004907    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:25.332160    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:25.405080    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:25.503618    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:25.831729    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:25.903996    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:26.005317    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:26.331789    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:26.432757    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:26.504651    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 16:57:26.831661    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:26.904677    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:27.003531    8324 kapi.go:107] duration metric: took 23.504113916s to wait for kubernetes.io/minikube-addons=registry ...
	I0917 16:57:27.138981    8324 pod_ready.go:103] pod "coredns-7c65d6cfc9-nfdb2" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:27.331617    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:27.432460    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:27.831777    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:27.903614    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:28.332229    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:28.404742    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:28.832787    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:28.906577    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:29.140087    8324 pod_ready.go:103] pod "coredns-7c65d6cfc9-nfdb2" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:29.335332    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:29.406037    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:29.834160    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:29.905371    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:30.332728    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:30.405853    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:30.831585    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:30.904338    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:31.140981    8324 pod_ready.go:103] pod "coredns-7c65d6cfc9-nfdb2" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:31.332266    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:31.404535    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:31.831278    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:31.903738    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:32.330847    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:32.404238    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:32.831312    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:32.905359    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:33.139101    8324 pod_ready.go:93] pod "coredns-7c65d6cfc9-nfdb2" in "kube-system" namespace has status "Ready":"True"
	I0917 16:57:33.139126    8324 pod_ready.go:82] duration metric: took 29.507290643s for pod "coredns-7c65d6cfc9-nfdb2" in "kube-system" namespace to be "Ready" ...
	I0917 16:57:33.139139    8324 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-tg5hf" in "kube-system" namespace to be "Ready" ...
	I0917 16:57:33.143318    8324 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-tg5hf" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-tg5hf" not found
	I0917 16:57:33.143394    8324 pod_ready.go:82] duration metric: took 4.244543ms for pod "coredns-7c65d6cfc9-tg5hf" in "kube-system" namespace to be "Ready" ...
	E0917 16:57:33.143421    8324 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-tg5hf" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-tg5hf" not found
	I0917 16:57:33.143459    8324 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-731605" in "kube-system" namespace to be "Ready" ...
	I0917 16:57:33.151817    8324 pod_ready.go:93] pod "etcd-addons-731605" in "kube-system" namespace has status "Ready":"True"
	I0917 16:57:33.151849    8324 pod_ready.go:82] duration metric: took 8.363435ms for pod "etcd-addons-731605" in "kube-system" namespace to be "Ready" ...
	I0917 16:57:33.151866    8324 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-731605" in "kube-system" namespace to be "Ready" ...
	I0917 16:57:33.157475    8324 pod_ready.go:93] pod "kube-apiserver-addons-731605" in "kube-system" namespace has status "Ready":"True"
	I0917 16:57:33.157506    8324 pod_ready.go:82] duration metric: took 5.629396ms for pod "kube-apiserver-addons-731605" in "kube-system" namespace to be "Ready" ...
	I0917 16:57:33.157521    8324 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-731605" in "kube-system" namespace to be "Ready" ...
	I0917 16:57:33.162758    8324 pod_ready.go:93] pod "kube-controller-manager-addons-731605" in "kube-system" namespace has status "Ready":"True"
	I0917 16:57:33.162784    8324 pod_ready.go:82] duration metric: took 5.251759ms for pod "kube-controller-manager-addons-731605" in "kube-system" namespace to be "Ready" ...
	I0917 16:57:33.162800    8324 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dzqf4" in "kube-system" namespace to be "Ready" ...
	I0917 16:57:33.331768    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:33.337262    8324 pod_ready.go:93] pod "kube-proxy-dzqf4" in "kube-system" namespace has status "Ready":"True"
	I0917 16:57:33.337289    8324 pod_ready.go:82] duration metric: took 174.482289ms for pod "kube-proxy-dzqf4" in "kube-system" namespace to be "Ready" ...
	I0917 16:57:33.337304    8324 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-731605" in "kube-system" namespace to be "Ready" ...
	I0917 16:57:33.403927    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:33.736912    8324 pod_ready.go:93] pod "kube-scheduler-addons-731605" in "kube-system" namespace has status "Ready":"True"
	I0917 16:57:33.736939    8324 pod_ready.go:82] duration metric: took 399.626526ms for pod "kube-scheduler-addons-731605" in "kube-system" namespace to be "Ready" ...
	I0917 16:57:33.736951    8324 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-zjjq7" in "kube-system" namespace to be "Ready" ...
	I0917 16:57:33.832926    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:33.904859    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:34.331230    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:34.405234    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:34.834454    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:34.903760    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:35.332426    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:35.404235    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:35.744845    8324 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjjq7" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:35.832461    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:35.904403    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:36.331770    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:36.404612    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:36.832349    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:36.907147    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:37.336236    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:37.405349    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:37.831817    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:37.914078    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:38.245834    8324 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zjjq7" in "kube-system" namespace has status "Ready":"False"
	I0917 16:57:38.331611    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:38.420490    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:38.745107    8324 pod_ready.go:93] pod "metrics-server-84c5f94fbc-zjjq7" in "kube-system" namespace has status "Ready":"True"
	I0917 16:57:38.745132    8324 pod_ready.go:82] duration metric: took 5.008172958s for pod "metrics-server-84c5f94fbc-zjjq7" in "kube-system" namespace to be "Ready" ...
	I0917 16:57:38.745145    8324 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-9bwdv" in "kube-system" namespace to be "Ready" ...
	I0917 16:57:38.751921    8324 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-9bwdv" in "kube-system" namespace has status "Ready":"True"
	I0917 16:57:38.751947    8324 pod_ready.go:82] duration metric: took 6.793092ms for pod "nvidia-device-plugin-daemonset-9bwdv" in "kube-system" namespace to be "Ready" ...
	I0917 16:57:38.751969    8324 pod_ready.go:39] duration metric: took 35.224225146s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 16:57:38.751987    8324 api_server.go:52] waiting for apiserver process to appear ...
	I0917 16:57:38.752053    8324 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 16:57:38.771916    8324 api_server.go:72] duration metric: took 47.866175002s to wait for apiserver process to appear ...
	I0917 16:57:38.771943    8324 api_server.go:88] waiting for apiserver healthz status ...
	I0917 16:57:38.771965    8324 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0917 16:57:38.782402    8324 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0917 16:57:38.783801    8324 api_server.go:141] control plane version: v1.31.1
	I0917 16:57:38.783827    8324 api_server.go:131] duration metric: took 11.876298ms to wait for apiserver health ...
	I0917 16:57:38.783835    8324 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 16:57:38.792686    8324 system_pods.go:59] 17 kube-system pods found
	I0917 16:57:38.792724    8324 system_pods.go:61] "coredns-7c65d6cfc9-nfdb2" [4a2ff10d-66fd-4411-aeee-a6fd0f092c93] Running
	I0917 16:57:38.792735    8324 system_pods.go:61] "csi-hostpath-attacher-0" [efb49be6-b3cb-46a6-ab37-9da589ebee49] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0917 16:57:38.792742    8324 system_pods.go:61] "csi-hostpath-resizer-0" [8e281caa-9272-4066-8edc-1969e947de38] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0917 16:57:38.792751    8324 system_pods.go:61] "csi-hostpathplugin-kmvnn" [5856def8-de60-43e9-8c1b-df459e3126c9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0917 16:57:38.792756    8324 system_pods.go:61] "etcd-addons-731605" [d6b238f1-7f72-40cf-b74e-ed79a0174ca8] Running
	I0917 16:57:38.792761    8324 system_pods.go:61] "kube-apiserver-addons-731605" [d768f6b6-3871-4d26-85f4-54ec30c15e51] Running
	I0917 16:57:38.792765    8324 system_pods.go:61] "kube-controller-manager-addons-731605" [9cc91090-ea89-4915-a417-1eb8e0859aba] Running
	I0917 16:57:38.792772    8324 system_pods.go:61] "kube-ingress-dns-minikube" [7e5b92cd-d0ba-4c9d-b0b6-9efdbb97c241] Running
	I0917 16:57:38.792775    8324 system_pods.go:61] "kube-proxy-dzqf4" [ea6f0a01-0aef-40a4-a999-4c7c9f47d4bb] Running
	I0917 16:57:38.792780    8324 system_pods.go:61] "kube-scheduler-addons-731605" [7f6cb6a4-4c17-4876-99bf-cd6c418c3854] Running
	I0917 16:57:38.792787    8324 system_pods.go:61] "metrics-server-84c5f94fbc-zjjq7" [5244e1a0-b041-4b8b-9a1a-97aa3d2df4f0] Running
	I0917 16:57:38.792793    8324 system_pods.go:61] "nvidia-device-plugin-daemonset-9bwdv" [611e2832-baef-4884-ac81-badda29286e4] Running
	I0917 16:57:38.792804    8324 system_pods.go:61] "registry-66c9cd494c-zt9dz" [e7f2fc50-5c03-4aec-9040-85d9963af8e6] Running
	I0917 16:57:38.792809    8324 system_pods.go:61] "registry-proxy-r92r6" [5d64f5cf-2b0e-40f7-88ca-5822f9941c5a] Running
	I0917 16:57:38.792813    8324 system_pods.go:61] "snapshot-controller-56fcc65765-s6zlr" [36495912-63bd-4bf0-840e-6d78e14c70b9] Running
	I0917 16:57:38.792817    8324 system_pods.go:61] "snapshot-controller-56fcc65765-vlpz9" [bdc32038-f926-486f-aae6-ed0f0ae51f25] Running
	I0917 16:57:38.792826    8324 system_pods.go:61] "storage-provisioner" [27e44c10-971a-4e5f-96f6-bbba4e427bd0] Running
	I0917 16:57:38.792833    8324 system_pods.go:74] duration metric: took 8.990908ms to wait for pod list to return data ...
	I0917 16:57:38.792847    8324 default_sa.go:34] waiting for default service account to be created ...
	I0917 16:57:38.831343    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:38.903961    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:38.935808    8324 default_sa.go:45] found service account: "default"
	I0917 16:57:38.935838    8324 default_sa.go:55] duration metric: took 142.984297ms for default service account to be created ...
	I0917 16:57:38.935849    8324 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 16:57:39.144047    8324 system_pods.go:86] 17 kube-system pods found
	I0917 16:57:39.144092    8324 system_pods.go:89] "coredns-7c65d6cfc9-nfdb2" [4a2ff10d-66fd-4411-aeee-a6fd0f092c93] Running
	I0917 16:57:39.144105    8324 system_pods.go:89] "csi-hostpath-attacher-0" [efb49be6-b3cb-46a6-ab37-9da589ebee49] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0917 16:57:39.144112    8324 system_pods.go:89] "csi-hostpath-resizer-0" [8e281caa-9272-4066-8edc-1969e947de38] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0917 16:57:39.144122    8324 system_pods.go:89] "csi-hostpathplugin-kmvnn" [5856def8-de60-43e9-8c1b-df459e3126c9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0917 16:57:39.144141    8324 system_pods.go:89] "etcd-addons-731605" [d6b238f1-7f72-40cf-b74e-ed79a0174ca8] Running
	I0917 16:57:39.144147    8324 system_pods.go:89] "kube-apiserver-addons-731605" [d768f6b6-3871-4d26-85f4-54ec30c15e51] Running
	I0917 16:57:39.144152    8324 system_pods.go:89] "kube-controller-manager-addons-731605" [9cc91090-ea89-4915-a417-1eb8e0859aba] Running
	I0917 16:57:39.144166    8324 system_pods.go:89] "kube-ingress-dns-minikube" [7e5b92cd-d0ba-4c9d-b0b6-9efdbb97c241] Running
	I0917 16:57:39.144171    8324 system_pods.go:89] "kube-proxy-dzqf4" [ea6f0a01-0aef-40a4-a999-4c7c9f47d4bb] Running
	I0917 16:57:39.144178    8324 system_pods.go:89] "kube-scheduler-addons-731605" [7f6cb6a4-4c17-4876-99bf-cd6c418c3854] Running
	I0917 16:57:39.144183    8324 system_pods.go:89] "metrics-server-84c5f94fbc-zjjq7" [5244e1a0-b041-4b8b-9a1a-97aa3d2df4f0] Running
	I0917 16:57:39.144187    8324 system_pods.go:89] "nvidia-device-plugin-daemonset-9bwdv" [611e2832-baef-4884-ac81-badda29286e4] Running
	I0917 16:57:39.144201    8324 system_pods.go:89] "registry-66c9cd494c-zt9dz" [e7f2fc50-5c03-4aec-9040-85d9963af8e6] Running
	I0917 16:57:39.144205    8324 system_pods.go:89] "registry-proxy-r92r6" [5d64f5cf-2b0e-40f7-88ca-5822f9941c5a] Running
	I0917 16:57:39.144212    8324 system_pods.go:89] "snapshot-controller-56fcc65765-s6zlr" [36495912-63bd-4bf0-840e-6d78e14c70b9] Running
	I0917 16:57:39.144222    8324 system_pods.go:89] "snapshot-controller-56fcc65765-vlpz9" [bdc32038-f926-486f-aae6-ed0f0ae51f25] Running
	I0917 16:57:39.144226    8324 system_pods.go:89] "storage-provisioner" [27e44c10-971a-4e5f-96f6-bbba4e427bd0] Running
	I0917 16:57:39.144237    8324 system_pods.go:126] duration metric: took 208.381251ms to wait for k8s-apps to be running ...
	I0917 16:57:39.144247    8324 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 16:57:39.144305    8324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 16:57:39.158522    8324 system_svc.go:56] duration metric: took 14.251023ms WaitForService to wait for kubelet
	I0917 16:57:39.158557    8324 kubeadm.go:582] duration metric: took 48.252828761s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 16:57:39.158583    8324 node_conditions.go:102] verifying NodePressure condition ...
	I0917 16:57:39.331813    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:39.336531    8324 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0917 16:57:39.336606    8324 node_conditions.go:123] node cpu capacity is 2
	I0917 16:57:39.336634    8324 node_conditions.go:105] duration metric: took 178.036695ms to run NodePressure ...
	I0917 16:57:39.336661    8324 start.go:241] waiting for startup goroutines ...
	I0917 16:57:39.404435    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:39.832184    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:39.904906    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:40.331453    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:40.404239    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:40.832283    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:40.904301    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:41.332809    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:41.403916    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:41.832157    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:41.905081    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:42.331312    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:42.405046    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:42.831741    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:42.905520    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:43.333109    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:43.405342    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:43.831133    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:43.903519    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:44.331175    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:44.404468    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:44.830972    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:44.931737    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:45.333553    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:45.405009    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:45.832344    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:45.905192    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:46.332140    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:46.403946    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:46.831697    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:46.903906    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:47.332107    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:47.404271    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:47.831079    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:47.903448    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:48.330732    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:48.403156    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:48.838937    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:48.938002    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:49.334638    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:49.403443    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:49.833247    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:49.904822    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:50.332835    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:50.404153    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:50.837532    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:50.905372    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:51.340582    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:51.405144    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:51.832217    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:51.932872    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:52.331413    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:52.403799    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:52.831388    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:52.904759    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:53.331115    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:53.403588    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:53.831959    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:53.904206    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:54.332420    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:54.404517    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:54.832217    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:54.904255    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:55.331030    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:55.403233    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:55.830862    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:55.903911    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:56.331733    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:56.405002    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:56.833035    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:56.903472    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:57.331019    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:57.404955    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:57.831364    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:57.903919    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:58.332462    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:58.431670    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:58.830600    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 16:57:58.903880    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:59.331245    8324 kapi.go:107] duration metric: took 54.505147526s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0917 16:57:59.403456    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:57:59.904380    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:58:00.412664    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:58:00.903817    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:58:01.404315    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:58:01.904188    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:58:02.403250    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:58:02.902983    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:58:03.404321    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:58:03.903875    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:58:04.403630    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:58:04.904251    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:58:05.403855    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:58:05.904021    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:58:06.404086    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:58:06.903383    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:58:07.404517    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:58:07.905174    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:58:08.403284    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:58:08.904031    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:58:09.403764    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:58:09.904355    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:58:10.403462    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:58:10.904607    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:58:11.404648    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:58:11.904493    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:58:12.404097    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:58:12.904053    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:58:13.404535    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:58:13.903843    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:58:14.403634    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:58:14.904135    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:58:15.403103    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:58:15.903602    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:58:16.403291    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:58:16.904114    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:58:17.405937    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:58:17.906884    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:58:18.404512    8324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 16:58:18.903545    8324 kapi.go:107] duration metric: took 1m17.004488036s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0917 16:58:30.197235    8324 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0917 16:58:30.197274    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:30.690605    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:31.190424    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:31.691421    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:32.190116    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:32.689779    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:33.190755    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:33.690906    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:34.190205    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:34.689784    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:35.191247    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:35.690345    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:36.191269    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:36.691247    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:37.190969    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:37.690864    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:38.190997    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:38.690862    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:39.191309    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:39.690402    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:40.190258    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:40.690113    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:41.190547    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:41.690094    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:42.190294    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:42.690794    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:43.190616    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:43.690491    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:44.196248    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:44.690669    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:45.192453    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:45.691393    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:46.190484    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:46.690058    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:47.191206    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:47.689765    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:48.194615    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:48.690143    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:49.190811    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:49.690611    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:50.190658    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:50.690105    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:51.190403    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:51.689773    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:52.189914    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:52.690304    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:53.191103    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:53.689895    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:54.191102    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:54.691121    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:55.192082    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:55.690389    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:56.190180    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:56.690676    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:57.190947    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:57.690992    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:58.190937    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:58.690534    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:59.191386    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:58:59.691178    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:00.215019    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:00.690090    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:01.191332    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:01.690492    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:02.190380    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:02.690819    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:03.190558    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:03.691507    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:04.190791    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:04.689969    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:05.190971    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:05.690234    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:06.190565    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:06.692131    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:07.190435    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:07.690918    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:08.190327    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:08.690023    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:09.190784    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:09.690547    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:10.190780    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:10.691366    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:11.190952    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:11.690883    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:12.191751    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:12.690045    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:13.191429    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:13.690262    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:14.189758    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:14.689980    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:15.191258    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:15.689798    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:16.189933    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:16.689984    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:17.190804    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:17.690483    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:18.190333    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:18.690662    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:19.190938    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:19.690351    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:20.191004    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:20.689694    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:21.190297    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:21.689832    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:22.190657    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:22.690268    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:23.190673    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:23.690301    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:24.189727    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:24.690272    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:25.190354    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:25.689887    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:26.190038    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:26.689762    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:27.190950    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:27.690088    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:28.191148    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:28.689879    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:29.190913    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:29.690702    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:30.191364    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:30.691010    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:31.191277    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:31.690258    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:32.190946    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:32.690744    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:33.190888    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:33.690302    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:34.189942    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:34.690763    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:35.190318    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:35.689745    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:36.190625    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:36.690961    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:37.191540    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:37.692441    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:38.191477    8324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 16:59:38.690584    8324 kapi.go:107] duration metric: took 2m32.003940051s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0917 16:59:38.692857    8324 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-731605 cluster.
	I0917 16:59:38.695271    8324 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0917 16:59:38.697257    8324 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0917 16:59:38.699177    8324 out.go:177] * Enabled addons: storage-provisioner, cloud-spanner, volcano, ingress-dns, nvidia-device-plugin, metrics-server, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0917 16:59:38.700969    8324 addons.go:510] duration metric: took 2m47.794830815s for enable addons: enabled=[storage-provisioner cloud-spanner volcano ingress-dns nvidia-device-plugin metrics-server inspektor-gadget yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0917 16:59:38.701019    8324 start.go:246] waiting for cluster config update ...
	I0917 16:59:38.701044    8324 start.go:255] writing updated cluster config ...
	I0917 16:59:38.701337    8324 ssh_runner.go:195] Run: rm -f paused
	I0917 16:59:39.046149    8324 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0917 16:59:39.048927    8324 out.go:177] * Done! kubectl is now configured to use "addons-731605" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 17 17:09:21 addons-731605 dockerd[1280]: time="2024-09-17T17:09:21.100651994Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 17 17:09:21 addons-731605 dockerd[1280]: time="2024-09-17T17:09:21.104101291Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 17 17:09:24 addons-731605 dockerd[1280]: time="2024-09-17T17:09:24.897283978Z" level=info msg="ignoring event" container=a4b44604909714a55bcb9cd03abad1ede30788f26874f92fc5dc45569594cd88 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:09:24 addons-731605 dockerd[1280]: time="2024-09-17T17:09:24.900961338Z" level=info msg="ignoring event" container=980659fd5d7b9b3905aafc2c40388060c608878ee6fbe258948d9a5dc774b1ef module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:09:25 addons-731605 dockerd[1280]: time="2024-09-17T17:09:25.098157593Z" level=info msg="ignoring event" container=0391ae36109d61c77a7ebd2f1bf62fcd8d259445ca68ab9780f3a6ff63a2f7fc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:09:25 addons-731605 dockerd[1280]: time="2024-09-17T17:09:25.108543047Z" level=info msg="ignoring event" container=961d91f7e48500c751df5596a0500d5d6aeba1f17ae36f45acb66a5fa40a3fcf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:09:30 addons-731605 dockerd[1280]: time="2024-09-17T17:09:30.634067643Z" level=info msg="ignoring event" container=0a2ebfe99cfa4f17b07ae8f64336c067a2fd5737c07a1a03ee143d119e5ed627 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:09:30 addons-731605 dockerd[1280]: time="2024-09-17T17:09:30.800600325Z" level=info msg="ignoring event" container=cb21244199262c49750eceddd86887f3d077d1202e7d03bdde74059829f826ff module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:09:31 addons-731605 cri-dockerd[1537]: time="2024-09-17T17:09:31Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bcda20f4f0f2000c7fbf8a8882242b723aff37eb1f35a52f6946607402fdb17e/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Sep 17 17:09:31 addons-731605 dockerd[1280]: time="2024-09-17T17:09:31.765551811Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 17 17:09:32 addons-731605 cri-dockerd[1537]: time="2024-09-17T17:09:32Z" level=info msg="Stop pulling image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: Status: Downloaded newer image for busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 17 17:09:32 addons-731605 dockerd[1280]: time="2024-09-17T17:09:32.455527382Z" level=info msg="ignoring event" container=777341523cd638e49172c9a3c59f5b2d4d5325a258b786907cdd8986b37780ee module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:09:34 addons-731605 dockerd[1280]: time="2024-09-17T17:09:34.618644738Z" level=info msg="ignoring event" container=bcda20f4f0f2000c7fbf8a8882242b723aff37eb1f35a52f6946607402fdb17e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:09:36 addons-731605 cri-dockerd[1537]: time="2024-09-17T17:09:36Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1c365ee7882c79cd74cccbf27de272e59587e4b71851827fd144ff9ad6198aa9/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Sep 17 17:09:37 addons-731605 cri-dockerd[1537]: time="2024-09-17T17:09:37Z" level=info msg="Stop pulling image busybox:stable: Status: Downloaded newer image for busybox:stable"
	Sep 17 17:09:37 addons-731605 dockerd[1280]: time="2024-09-17T17:09:37.418382020Z" level=info msg="ignoring event" container=b7808becbd4f6887873d5bd0e3a852450142a927ab29ff4ae28d42d6970a2608 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:09:38 addons-731605 dockerd[1280]: time="2024-09-17T17:09:38.694378001Z" level=info msg="ignoring event" container=1c365ee7882c79cd74cccbf27de272e59587e4b71851827fd144ff9ad6198aa9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:09:39 addons-731605 dockerd[1280]: time="2024-09-17T17:09:39.166133987Z" level=info msg="ignoring event" container=24a4a16392dcd1f868c3f045d3ebe339272f76c8ca00f3bee70adca946449480 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:09:40 addons-731605 dockerd[1280]: time="2024-09-17T17:09:40.230160801Z" level=info msg="ignoring event" container=90ab266308be339d7c9c682c950765d456ab2245e1932b4f1d24143e258d8b1a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:09:40 addons-731605 dockerd[1280]: time="2024-09-17T17:09:40.332164413Z" level=info msg="ignoring event" container=82075691c00f7c870a397e86b9da1fbbeb20c95df72a1c3f1efa767c30c353db module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:09:40 addons-731605 dockerd[1280]: time="2024-09-17T17:09:40.571821340Z" level=info msg="ignoring event" container=60fb33f8c319d205b085e361642e2d8c816b39d85dcd51bff92f71aedd0131c1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:09:40 addons-731605 cri-dockerd[1537]: time="2024-09-17T17:09:40Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"registry-proxy-r92r6_kube-system\": unexpected command output nsenter: cannot open /proc/3658/ns/net: No such file or directory\n with error: exit status 1"
	Sep 17 17:09:40 addons-731605 dockerd[1280]: time="2024-09-17T17:09:40.816043905Z" level=info msg="ignoring event" container=eb4d38db100b918816e866d3241ba3bf7ba0ec391ca6b36456cdcc214c125532 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:09:40 addons-731605 cri-dockerd[1537]: time="2024-09-17T17:09:40Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e66d23dac583851385b9e1ff80a425d15cd442ffc4e379ca4b964360014c424d/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Sep 17 17:09:41 addons-731605 dockerd[1280]: time="2024-09-17T17:09:41.219615663Z" level=info msg="ignoring event" container=1e5306b090cde950c9d295d124a5df5a59d9133620fc195e91dfe74ab606ae89 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD
	1e5306b090cde       fc9db2894f4e4                                                                                                                Less than a second ago   Exited              helper-pod                0                   e66d23dac5838       helper-pod-delete-pvc-2a686ca8-31e7-49b1-9287-cefb015b44f4
	b7808becbd4f6       busybox@sha256:c230832bd3b0be59a6c47ed64294f9ce71e91b327957920b6929a0caa8353140                                              4 seconds ago            Exited              busybox                   0                   1c365ee7882c7       test-local-path
	777341523cd63       busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79                                              9 seconds ago            Exited              helper-pod                0                   bcda20f4f0f20       helper-pod-create-pvc-2a686ca8-31e7-49b1-9287-cefb015b44f4
	23b2b53b3329d       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec            46 seconds ago           Exited              gadget                    7                   c082e529a8874       gadget-rmt5s
	8a68cd541b1d2       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 10 minutes ago           Running             gcp-auth                  0                   b17af86118a92       gcp-auth-89d5ffd79-qclfh
	6d86e01a997e2       registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce             11 minutes ago           Running             controller                0                   24b97b66662b4       ingress-nginx-controller-bc57996ff-dlbd6
	a5985a5cbe6bc       420193b27261a                                                                                                                12 minutes ago           Exited              patch                     1                   76da58b1a25c2       ingress-nginx-admission-patch-wmwnk
	2bc46ea09841e       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   12 minutes ago           Exited              create                    0                   993a7ea37ed36       ingress-nginx-admission-create-h45mt
	fd060edcacb72       registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9        12 minutes ago           Running             metrics-server            0                   08bb05d109808       metrics-server-84c5f94fbc-zjjq7
	19eea6c0b3203       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                       12 minutes ago           Running             local-path-provisioner    0                   91be9c91543f3       local-path-provisioner-86d989889c-4twxk
	82075691c00f7       gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367              12 minutes ago           Exited              registry-proxy            0                   eb4d38db100b9       registry-proxy-r92r6
	1b32503749711       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc               12 minutes ago           Running             cloud-spanner-emulator    0                   b229efc62870e       cloud-spanner-emulator-769b77f747-4nkn2
	28b85fc14950d       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c             12 minutes ago           Running             minikube-ingress-dns      0                   8f149f15b6b0f       kube-ingress-dns-minikube
	b1206bfa1af02       ba04bb24b9575                                                                                                                12 minutes ago           Running             storage-provisioner       0                   7b09c4d7d64d2       storage-provisioner
	aff19a29674a8       2f6c962e7b831                                                                                                                12 minutes ago           Running             coredns                   0                   f76430450ea4f       coredns-7c65d6cfc9-nfdb2
	1bfe9709592fb       24a140c548c07                                                                                                                12 minutes ago           Running             kube-proxy                0                   ff139dc173e14       kube-proxy-dzqf4
	671d0d8d947c8       d3f53a98c0a9d                                                                                                                13 minutes ago           Running             kube-apiserver            0                   7a7390048d8fc       kube-apiserver-addons-731605
	7a1aea2005d68       7f8aa378bb47d                                                                                                                13 minutes ago           Running             kube-scheduler            0                   c093fd56197c1       kube-scheduler-addons-731605
	cddc5b3da9b13       27e3830e14027                                                                                                                13 minutes ago           Running             etcd                      0                   755d15375289f       etcd-addons-731605
	1e9ef30732ba5       279f381cb3736                                                                                                                13 minutes ago           Running             kube-controller-manager   0                   4241d707d97a3       kube-controller-manager-addons-731605
	
	
	==> controller_ingress [6d86e01a997e] <==
	  Build:         46e76e5916813cfca2a9b0bfdc34b69a0000f6b9
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.25.5
	
	-------------------------------------------------------------------------------
	
	W0917 16:58:17.925838       6 client_config.go:659] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
	I0917 16:58:17.926076       6 main.go:205] "Creating API client" host="https://10.96.0.1:443"
	I0917 16:58:17.936059       6 main.go:248] "Running in Kubernetes cluster" major="1" minor="31" git="v1.31.1" state="clean" commit="948afe5ca072329a73c8e79ed5938717a5cb3d21" platform="linux/arm64"
	I0917 16:58:18.538815       6 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0917 16:58:18.556320       6 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0917 16:58:18.570474       6 nginx.go:271] "Starting NGINX Ingress controller"
	I0917 16:58:18.583125       6 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"b37a8e76-dfd4-4875-a3f6-ac9a8bd2add3", APIVersion:"v1", ResourceVersion:"652", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0917 16:58:18.595013       6 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"ad826244-5b86-4edd-a84f-81f7dd149a3b", APIVersion:"v1", ResourceVersion:"653", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0917 16:58:18.595271       6 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"424af7ac-27da-4d25-8391-2c5841750d15", APIVersion:"v1", ResourceVersion:"654", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0917 16:58:19.772349       6 nginx.go:317] "Starting NGINX process"
	I0917 16:58:19.772583       6 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0917 16:58:19.772678       6 nginx.go:337] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0917 16:58:19.772966       6 controller.go:193] "Configuration changes detected, backend reload required"
	I0917 16:58:19.795984       6 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0917 16:58:19.796008       6 status.go:85] "New leader elected" identity="ingress-nginx-controller-bc57996ff-dlbd6"
	I0917 16:58:19.813670       6 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-bc57996ff-dlbd6" node="addons-731605"
	I0917 16:58:19.825734       6 controller.go:213] "Backend successfully reloaded"
	I0917 16:58:19.825811       6 controller.go:224] "Initial sync, sleeping for 1 second"
	I0917 16:58:19.825881       6 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-dlbd6", UID:"25179970-b297-4b7e-ad11-505505d0f732", APIVersion:"v1", ResourceVersion:"687", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	
	
	==> coredns [aff19a29674a] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1607234292]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 16:56:52.762) (total time: 30000ms):
	Trace[1607234292]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (16:57:22.762)
	Trace[1607234292]: [30.000332865s] [30.000332865s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1097650227]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 16:56:52.762) (total time: 30000ms):
	Trace[1097650227]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (16:57:22.762)
	Trace[1097650227]: [30.000269351s] [30.000269351s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	[INFO] Reloading complete
	[INFO] 127.0.0.1:60555 - 14179 "HINFO IN 6978345999287434417.5096588345528641219. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012255206s
	[INFO] 10.244.0.25:53039 - 125 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000408366s
	[INFO] 10.244.0.25:42689 - 17165 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000750477s
	[INFO] 10.244.0.25:51585 - 46447 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000758264s
	[INFO] 10.244.0.25:32841 - 51141 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000513498s
	[INFO] 10.244.0.25:52300 - 14719 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00012045s
	[INFO] 10.244.0.25:55336 - 21098 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000371042s
	[INFO] 10.244.0.25:38510 - 50227 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002856482s
	[INFO] 10.244.0.25:33772 - 27234 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002867509s
	[INFO] 10.244.0.25:43715 - 47735 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.003736837s
	[INFO] 10.244.0.25:36022 - 57122 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.004384562s
	
	
	==> describe nodes <==
	Name:               addons-731605
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-731605
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce
	                    minikube.k8s.io/name=addons-731605
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_17T16_56_46_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-731605
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 16:56:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-731605
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 17:09:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Sep 2024 17:05:25 +0000   Tue, 17 Sep 2024 16:56:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Sep 2024 17:05:25 +0000   Tue, 17 Sep 2024 16:56:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Sep 2024 17:05:25 +0000   Tue, 17 Sep 2024 16:56:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Sep 2024 17:05:25 +0000   Tue, 17 Sep 2024 16:56:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-731605
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 286491942543477086e18cdc1090e9c3
	  System UUID:                5a48bcfa-b21a-45c7-a6db-3a28ea6859ee
	  Boot ID:                    fd8b8b92-550b-4c1f-b1a9-b9b8a832f9f6
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (16 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m18s
	  default                     cloud-spanner-emulator-769b77f747-4nkn2                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  gadget                      gadget-rmt5s                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  gcp-auth                    gcp-auth-89d5ffd79-qclfh                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-dlbd6                      100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         12m
	  kube-system                 coredns-7c65d6cfc9-nfdb2                                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-addons-731605                                            100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-731605                                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-731605                         200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-dzqf4                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-731605                                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 metrics-server-84c5f94fbc-zjjq7                               100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         12m
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          helper-pod-delete-pvc-2a686ca8-31e7-49b1-9287-cefb015b44f4    0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  local-path-storage          local-path-provisioner-86d989889c-4twxk                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  0 (0%)
	  memory             460Mi (5%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 12m   kube-proxy       
	  Normal   Starting                 12m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  12m   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m   kubelet          Node addons-731605 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m   kubelet          Node addons-731605 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m   kubelet          Node addons-731605 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m   node-controller  Node addons-731605 event: Registered Node addons-731605 in Controller
	
	
	==> dmesg <==
	[Sep17 16:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015067] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.492852] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.848588] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.621504] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> etcd [cddc5b3da9b1] <==
	{"level":"info","ts":"2024-09-17T16:56:40.092282Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-17T16:56:40.092293Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-17T16:56:40.471738Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-17T16:56:40.471843Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-17T16:56:40.471930Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-09-17T16:56:40.471983Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-09-17T16:56:40.472032Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-17T16:56:40.472078Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-17T16:56:40.472117Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-17T16:56:40.479525Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-731605 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-17T16:56:40.479941Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-17T16:56:40.480364Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T16:56:40.483705Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-17T16:56:40.484563Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-17T16:56:40.487711Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-17T16:56:40.487741Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-17T16:56:40.488347Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-17T16:56:40.488906Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-17T16:56:40.489187Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-17T16:56:40.491961Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T16:56:40.492150Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T16:56:40.492264Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T17:06:41.174241Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1852}
	{"level":"info","ts":"2024-09-17T17:06:41.228246Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1852,"took":"53.308182ms","hash":3964704908,"current-db-size-bytes":9043968,"current-db-size":"9.0 MB","current-db-size-in-use-bytes":4960256,"current-db-size-in-use":"5.0 MB"}
	{"level":"info","ts":"2024-09-17T17:06:41.228298Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3964704908,"revision":1852,"compact-revision":-1}
	
	
	==> gcp-auth [8a68cd541b1d] <==
	2024/09/17 16:59:37 GCP Auth Webhook started!
	2024/09/17 16:59:56 Ready to marshal response ...
	2024/09/17 16:59:56 Ready to write response ...
	2024/09/17 16:59:57 Ready to marshal response ...
	2024/09/17 16:59:57 Ready to write response ...
	2024/09/17 17:00:23 Ready to marshal response ...
	2024/09/17 17:00:23 Ready to write response ...
	2024/09/17 17:00:23 Ready to marshal response ...
	2024/09/17 17:00:23 Ready to write response ...
	2024/09/17 17:00:23 Ready to marshal response ...
	2024/09/17 17:00:23 Ready to write response ...
	2024/09/17 17:08:38 Ready to marshal response ...
	2024/09/17 17:08:38 Ready to write response ...
	2024/09/17 17:08:47 Ready to marshal response ...
	2024/09/17 17:08:47 Ready to write response ...
	2024/09/17 17:09:08 Ready to marshal response ...
	2024/09/17 17:09:08 Ready to write response ...
	2024/09/17 17:09:31 Ready to marshal response ...
	2024/09/17 17:09:31 Ready to write response ...
	2024/09/17 17:09:31 Ready to marshal response ...
	2024/09/17 17:09:31 Ready to write response ...
	2024/09/17 17:09:40 Ready to marshal response ...
	2024/09/17 17:09:40 Ready to write response ...
	
	
	==> kernel <==
	 17:09:42 up 52 min,  0 users,  load average: 2.25, 1.17, 0.81
	Linux addons-731605 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [671d0d8d947c] <==
	I0917 17:00:14.070064       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0917 17:00:14.423814       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0917 17:00:14.467633       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0917 17:00:14.517804       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0917 17:00:14.805828       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	W0917 17:00:15.071738       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0917 17:00:15.204099       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0917 17:00:15.223373       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0917 17:00:15.271368       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0917 17:00:15.518355       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0917 17:00:15.890119       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0917 17:08:55.071542       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0917 17:09:24.656377       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0917 17:09:24.656425       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0917 17:09:24.705432       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0917 17:09:24.705676       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0917 17:09:24.713721       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0917 17:09:24.713773       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0917 17:09:24.733279       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0917 17:09:24.733475       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0917 17:09:24.765201       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0917 17:09:24.765370       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0917 17:09:25.715453       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0917 17:09:25.765651       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0917 17:09:25.877959       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [1e9ef30732ba] <==
	W0917 17:09:26.725332       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:09:26.725376       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 17:09:27.226141       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:09:27.226184       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 17:09:28.817394       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:09:28.817440       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 17:09:29.302630       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:09:29.302673       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 17:09:29.600455       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:09:29.600500       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 17:09:29.859495       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:09:29.859538       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 17:09:30.013805       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:09:30.013849       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 17:09:33.261053       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:09:33.261097       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 17:09:35.030848       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:09:35.030917       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 17:09:35.065227       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:09:35.065290       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 17:09:35.348738       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:09:35.348850       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0917 17:09:40.060997       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="4.168µs"
	W0917 17:09:40.483418       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:09:40.483499       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [1bfe9709592f] <==
	I0917 16:56:52.407348       1 server_linux.go:66] "Using iptables proxy"
	I0917 16:56:52.520837       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0917 16:56:52.520910       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 16:56:52.563475       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0917 16:56:52.563723       1 server_linux.go:169] "Using iptables Proxier"
	I0917 16:56:52.567874       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 16:56:52.568819       1 server.go:483] "Version info" version="v1.31.1"
	I0917 16:56:52.568845       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 16:56:52.571471       1 config.go:199] "Starting service config controller"
	I0917 16:56:52.571710       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0917 16:56:52.571745       1 config.go:105] "Starting endpoint slice config controller"
	I0917 16:56:52.571750       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0917 16:56:52.576115       1 config.go:328] "Starting node config controller"
	I0917 16:56:52.576432       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0917 16:56:52.672566       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0917 16:56:52.672617       1 shared_informer.go:320] Caches are synced for service config
	I0917 16:56:52.676507       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [7a1aea2005d6] <==
	W0917 16:56:43.181064       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0917 16:56:43.181099       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:43.181182       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0917 16:56:43.181205       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:43.180858       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0917 16:56:43.181273       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:44.015591       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0917 16:56:44.015770       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:44.124198       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0917 16:56:44.124245       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:44.176638       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0917 16:56:44.176760       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:44.185995       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0917 16:56:44.186238       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:44.246437       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0917 16:56:44.246647       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:44.334306       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0917 16:56:44.334588       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:44.358786       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0917 16:56:44.359028       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:44.388122       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0917 16:56:44.388166       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 16:56:44.498405       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0917 16:56:44.498447       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0917 16:56:47.169657       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 17 17:09:39 addons-731605 kubelet[2329]: I0917 17:09:39.263367    2329 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06134add-c3e9-4ac8-acd2-14b40b0ed5e0-kube-api-access-6dcsc" (OuterVolumeSpecName: "kube-api-access-6dcsc") pod "06134add-c3e9-4ac8-acd2-14b40b0ed5e0" (UID: "06134add-c3e9-4ac8-acd2-14b40b0ed5e0"). InnerVolumeSpecName "kube-api-access-6dcsc". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 17 17:09:39 addons-731605 kubelet[2329]: I0917 17:09:39.331524    2329 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-6dcsc\" (UniqueName: \"kubernetes.io/projected/06134add-c3e9-4ac8-acd2-14b40b0ed5e0-kube-api-access-6dcsc\") on node \"addons-731605\" DevicePath \"\""
	Sep 17 17:09:39 addons-731605 kubelet[2329]: I0917 17:09:39.331575    2329 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/06134add-c3e9-4ac8-acd2-14b40b0ed5e0-gcp-creds\") on node \"addons-731605\" DevicePath \"\""
	Sep 17 17:09:39 addons-731605 kubelet[2329]: I0917 17:09:39.635659    2329 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c365ee7882c79cd74cccbf27de272e59587e4b71851827fd144ff9ad6198aa9"
	Sep 17 17:09:39 addons-731605 kubelet[2329]: I0917 17:09:39.956069    2329 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06134add-c3e9-4ac8-acd2-14b40b0ed5e0" path="/var/lib/kubelet/pods/06134add-c3e9-4ac8-acd2-14b40b0ed5e0/volumes"
	Sep 17 17:09:39 addons-731605 kubelet[2329]: I0917 17:09:39.956460    2329 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eadbb459-0328-4ead-a0b9-83d8977e81e1" path="/var/lib/kubelet/pods/eadbb459-0328-4ead-a0b9-83d8977e81e1/volumes"
	Sep 17 17:09:40 addons-731605 kubelet[2329]: E0917 17:09:40.046550    2329 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="eadbb459-0328-4ead-a0b9-83d8977e81e1" containerName="busybox"
	Sep 17 17:09:40 addons-731605 kubelet[2329]: I0917 17:09:40.046811    2329 memory_manager.go:354] "RemoveStaleState removing state" podUID="eadbb459-0328-4ead-a0b9-83d8977e81e1" containerName="busybox"
	Sep 17 17:09:40 addons-731605 kubelet[2329]: I0917 17:09:40.145213    2329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/58d0f5fb-9a0b-46a2-baea-513cdfdc4b8b-script\") pod \"helper-pod-delete-pvc-2a686ca8-31e7-49b1-9287-cefb015b44f4\" (UID: \"58d0f5fb-9a0b-46a2-baea-513cdfdc4b8b\") " pod="local-path-storage/helper-pod-delete-pvc-2a686ca8-31e7-49b1-9287-cefb015b44f4"
	Sep 17 17:09:40 addons-731605 kubelet[2329]: I0917 17:09:40.145517    2329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/58d0f5fb-9a0b-46a2-baea-513cdfdc4b8b-data\") pod \"helper-pod-delete-pvc-2a686ca8-31e7-49b1-9287-cefb015b44f4\" (UID: \"58d0f5fb-9a0b-46a2-baea-513cdfdc4b8b\") " pod="local-path-storage/helper-pod-delete-pvc-2a686ca8-31e7-49b1-9287-cefb015b44f4"
	Sep 17 17:09:40 addons-731605 kubelet[2329]: I0917 17:09:40.147175    2329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/58d0f5fb-9a0b-46a2-baea-513cdfdc4b8b-gcp-creds\") pod \"helper-pod-delete-pvc-2a686ca8-31e7-49b1-9287-cefb015b44f4\" (UID: \"58d0f5fb-9a0b-46a2-baea-513cdfdc4b8b\") " pod="local-path-storage/helper-pod-delete-pvc-2a686ca8-31e7-49b1-9287-cefb015b44f4"
	Sep 17 17:09:40 addons-731605 kubelet[2329]: I0917 17:09:40.147340    2329 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7krgw\" (UniqueName: \"kubernetes.io/projected/58d0f5fb-9a0b-46a2-baea-513cdfdc4b8b-kube-api-access-7krgw\") pod \"helper-pod-delete-pvc-2a686ca8-31e7-49b1-9287-cefb015b44f4\" (UID: \"58d0f5fb-9a0b-46a2-baea-513cdfdc4b8b\") " pod="local-path-storage/helper-pod-delete-pvc-2a686ca8-31e7-49b1-9287-cefb015b44f4"
	Sep 17 17:09:40 addons-731605 kubelet[2329]: I0917 17:09:40.653267    2329 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncjnl\" (UniqueName: \"kubernetes.io/projected/e7f2fc50-5c03-4aec-9040-85d9963af8e6-kube-api-access-ncjnl\") pod \"e7f2fc50-5c03-4aec-9040-85d9963af8e6\" (UID: \"e7f2fc50-5c03-4aec-9040-85d9963af8e6\") "
	Sep 17 17:09:40 addons-731605 kubelet[2329]: I0917 17:09:40.657933    2329 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7f2fc50-5c03-4aec-9040-85d9963af8e6-kube-api-access-ncjnl" (OuterVolumeSpecName: "kube-api-access-ncjnl") pod "e7f2fc50-5c03-4aec-9040-85d9963af8e6" (UID: "e7f2fc50-5c03-4aec-9040-85d9963af8e6"). InnerVolumeSpecName "kube-api-access-ncjnl". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 17 17:09:40 addons-731605 kubelet[2329]: I0917 17:09:40.694713    2329 scope.go:117] "RemoveContainer" containerID="90ab266308be339d7c9c682c950765d456ab2245e1932b4f1d24143e258d8b1a"
	Sep 17 17:09:40 addons-731605 kubelet[2329]: I0917 17:09:40.768297    2329 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-ncjnl\" (UniqueName: \"kubernetes.io/projected/e7f2fc50-5c03-4aec-9040-85d9963af8e6-kube-api-access-ncjnl\") on node \"addons-731605\" DevicePath \"\""
	Sep 17 17:09:40 addons-731605 kubelet[2329]: I0917 17:09:40.787346    2329 scope.go:117] "RemoveContainer" containerID="90ab266308be339d7c9c682c950765d456ab2245e1932b4f1d24143e258d8b1a"
	Sep 17 17:09:40 addons-731605 kubelet[2329]: E0917 17:09:40.831642    2329 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 90ab266308be339d7c9c682c950765d456ab2245e1932b4f1d24143e258d8b1a" containerID="90ab266308be339d7c9c682c950765d456ab2245e1932b4f1d24143e258d8b1a"
	Sep 17 17:09:40 addons-731605 kubelet[2329]: I0917 17:09:40.831822    2329 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"90ab266308be339d7c9c682c950765d456ab2245e1932b4f1d24143e258d8b1a"} err="failed to get container status \"90ab266308be339d7c9c682c950765d456ab2245e1932b4f1d24143e258d8b1a\": rpc error: code = Unknown desc = Error response from daemon: No such container: 90ab266308be339d7c9c682c950765d456ab2245e1932b4f1d24143e258d8b1a"
	Sep 17 17:09:40 addons-731605 kubelet[2329]: I0917 17:09:40.905918    2329 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e66d23dac583851385b9e1ff80a425d15cd442ffc4e379ca4b964360014c424d"
	Sep 17 17:09:41 addons-731605 kubelet[2329]: I0917 17:09:41.072254    2329 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmjmj\" (UniqueName: \"kubernetes.io/projected/5d64f5cf-2b0e-40f7-88ca-5822f9941c5a-kube-api-access-kmjmj\") pod \"5d64f5cf-2b0e-40f7-88ca-5822f9941c5a\" (UID: \"5d64f5cf-2b0e-40f7-88ca-5822f9941c5a\") "
	Sep 17 17:09:41 addons-731605 kubelet[2329]: I0917 17:09:41.076813    2329 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d64f5cf-2b0e-40f7-88ca-5822f9941c5a-kube-api-access-kmjmj" (OuterVolumeSpecName: "kube-api-access-kmjmj") pod "5d64f5cf-2b0e-40f7-88ca-5822f9941c5a" (UID: "5d64f5cf-2b0e-40f7-88ca-5822f9941c5a"). InnerVolumeSpecName "kube-api-access-kmjmj". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 17 17:09:41 addons-731605 kubelet[2329]: I0917 17:09:41.174310    2329 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-kmjmj\" (UniqueName: \"kubernetes.io/projected/5d64f5cf-2b0e-40f7-88ca-5822f9941c5a-kube-api-access-kmjmj\") on node \"addons-731605\" DevicePath \"\""
	Sep 17 17:09:41 addons-731605 kubelet[2329]: I0917 17:09:41.936033    2329 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7f2fc50-5c03-4aec-9040-85d9963af8e6" path="/var/lib/kubelet/pods/e7f2fc50-5c03-4aec-9040-85d9963af8e6/volumes"
	Sep 17 17:09:41 addons-731605 kubelet[2329]: I0917 17:09:41.970678    2329 scope.go:117] "RemoveContainer" containerID="82075691c00f7c870a397e86b9da1fbbeb20c95df72a1c3f1efa767c30c353db"
	
	
	==> storage-provisioner [b1206bfa1af0] <==
	I0917 16:56:57.816348       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0917 16:56:57.849407       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0917 16:56:57.849451       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0917 16:56:57.878324       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0917 16:56:57.878502       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-731605_ab94a06c-941e-418f-85c3-b8f646185e3f!
	I0917 16:56:57.879827       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"49612a31-d7be-4ca6-b014-a63c8813aa59", APIVersion:"v1", ResourceVersion:"512", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-731605_ab94a06c-941e-418f-85c3-b8f646185e3f became leader
	I0917 16:56:57.979083       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-731605_ab94a06c-941e-418f-85c3-b8f646185e3f!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-731605 -n addons-731605
helpers_test.go:261: (dbg) Run:  kubectl --context addons-731605 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-h45mt ingress-nginx-admission-patch-wmwnk helper-pod-delete-pvc-2a686ca8-31e7-49b1-9287-cefb015b44f4
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-731605 describe pod busybox ingress-nginx-admission-create-h45mt ingress-nginx-admission-patch-wmwnk helper-pod-delete-pvc-2a686ca8-31e7-49b1-9287-cefb015b44f4
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-731605 describe pod busybox ingress-nginx-admission-create-h45mt ingress-nginx-admission-patch-wmwnk helper-pod-delete-pvc-2a686ca8-31e7-49b1-9287-cefb015b44f4: exit status 1 (153.602708ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-731605/192.168.49.2
	Start Time:       Tue, 17 Sep 2024 17:00:23 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wkp4b (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-wkp4b:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m20s                   default-scheduler  Successfully assigned default/busybox to addons-731605
	  Normal   Pulling    7m52s (x4 over 9m19s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m51s (x4 over 9m19s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m51s (x4 over 9m19s)   kubelet            Error: ErrImagePull
	  Warning  Failed     7m26s (x6 over 9m19s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m12s (x20 over 9m19s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-h45mt" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-wmwnk" not found
	Error from server (NotFound): pods "helper-pod-delete-pvc-2a686ca8-31e7-49b1-9287-cefb015b44f4" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-731605 describe pod busybox ingress-nginx-admission-create-h45mt ingress-nginx-admission-patch-wmwnk helper-pod-delete-pvc-2a686ca8-31e7-49b1-9287-cefb015b44f4: exit status 1
--- FAIL: TestAddons/parallel/Registry (76.48s)


Test pass (318/343)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 9.75
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.15
9 TestDownloadOnly/v1.20.0/DeleteAll 0.33
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.21
12 TestDownloadOnly/v1.31.1/json-events 5.58
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.07
18 TestDownloadOnly/v1.31.1/DeleteAll 0.2
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.57
22 TestOffline 60.71
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 222.85
29 TestAddons/serial/Volcano 44.2
31 TestAddons/serial/GCPAuth/Namespaces 0.18
34 TestAddons/parallel/Ingress 20.62
35 TestAddons/parallel/InspektorGadget 12.54
36 TestAddons/parallel/MetricsServer 6.72
39 TestAddons/parallel/CSI 46.56
40 TestAddons/parallel/Headlamp 16.66
41 TestAddons/parallel/CloudSpanner 6.71
42 TestAddons/parallel/LocalPath 9.91
43 TestAddons/parallel/NvidiaDevicePlugin 5.5
44 TestAddons/parallel/Yakd 11.74
45 TestAddons/StoppedEnableDisable 11.29
46 TestCertOptions 43.85
47 TestCertExpiration 250.2
48 TestDockerFlags 45.75
49 TestForceSystemdFlag 55.2
50 TestForceSystemdEnv 42.67
56 TestErrorSpam/setup 35.73
57 TestErrorSpam/start 0.81
58 TestErrorSpam/status 1.14
59 TestErrorSpam/pause 1.43
60 TestErrorSpam/unpause 1.51
61 TestErrorSpam/stop 11
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 77.37
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 33.46
68 TestFunctional/serial/KubeContext 0.06
69 TestFunctional/serial/KubectlGetPods 0.11
72 TestFunctional/serial/CacheCmd/cache/add_remote 3.54
73 TestFunctional/serial/CacheCmd/cache/add_local 0.99
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
75 TestFunctional/serial/CacheCmd/cache/list 0.06
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.62
78 TestFunctional/serial/CacheCmd/cache/delete 0.13
79 TestFunctional/serial/MinikubeKubectlCmd 0.13
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
81 TestFunctional/serial/ExtraConfig 45.23
82 TestFunctional/serial/ComponentHealth 0.1
83 TestFunctional/serial/LogsCmd 1.17
84 TestFunctional/serial/LogsFileCmd 1.19
85 TestFunctional/serial/InvalidService 4.83
87 TestFunctional/parallel/ConfigCmd 0.46
88 TestFunctional/parallel/DashboardCmd 14.79
89 TestFunctional/parallel/DryRun 0.59
90 TestFunctional/parallel/InternationalLanguage 0.24
91 TestFunctional/parallel/StatusCmd 1.41
95 TestFunctional/parallel/ServiceCmdConnect 12.73
96 TestFunctional/parallel/AddonsCmd 0.22
97 TestFunctional/parallel/PersistentVolumeClaim 28.44
99 TestFunctional/parallel/SSHCmd 0.67
100 TestFunctional/parallel/CpCmd 2.28
102 TestFunctional/parallel/FileSync 0.29
103 TestFunctional/parallel/CertSync 2.06
107 TestFunctional/parallel/NodeLabels 0.13
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.4
111 TestFunctional/parallel/License 0.29
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.66
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.45
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.1
118 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
122 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
123 TestFunctional/parallel/ServiceCmd/DeployApp 6.3
124 TestFunctional/parallel/ServiceCmd/List 0.51
125 TestFunctional/parallel/ServiceCmd/JSONOutput 0.51
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.5
127 TestFunctional/parallel/ServiceCmd/HTTPS 0.51
128 TestFunctional/parallel/ProfileCmd/profile_list 0.52
129 TestFunctional/parallel/ServiceCmd/Format 0.53
130 TestFunctional/parallel/ProfileCmd/profile_json_output 0.55
131 TestFunctional/parallel/ServiceCmd/URL 0.5
132 TestFunctional/parallel/MountCmd/any-port 9.81
133 TestFunctional/parallel/MountCmd/specific-port 2.32
134 TestFunctional/parallel/MountCmd/VerifyCleanup 1.85
135 TestFunctional/parallel/Version/short 0.07
136 TestFunctional/parallel/Version/components 1.15
137 TestFunctional/parallel/ImageCommands/ImageListShort 0.27
138 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
139 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
140 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
141 TestFunctional/parallel/ImageCommands/ImageBuild 3.29
142 TestFunctional/parallel/ImageCommands/Setup 4.61
143 TestFunctional/parallel/UpdateContextCmd/no_changes 0.15
144 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.23
145 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.21
146 TestFunctional/parallel/DockerEnv/bash 1.08
147 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.3
148 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.92
149 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.19
150 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.32
151 TestFunctional/parallel/ImageCommands/ImageRemove 0.44
152 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.64
153 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.41
154 TestFunctional/delete_echo-server_images 0.05
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
160 TestMultiControlPlane/serial/StartCluster 136.51
161 TestMultiControlPlane/serial/DeployApp 9.09
162 TestMultiControlPlane/serial/PingHostFromPods 1.73
163 TestMultiControlPlane/serial/AddWorkerNode 27.59
164 TestMultiControlPlane/serial/NodeLabels 0.1
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.8
166 TestMultiControlPlane/serial/CopyFile 20.17
167 TestMultiControlPlane/serial/StopSecondaryNode 11.74
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.61
169 TestMultiControlPlane/serial/RestartSecondaryNode 59.19
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.77
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 186.04
172 TestMultiControlPlane/serial/DeleteSecondaryNode 11.67
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.56
174 TestMultiControlPlane/serial/StopCluster 33.19
175 TestMultiControlPlane/serial/RestartCluster 153.23
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.62
177 TestMultiControlPlane/serial/AddSecondaryNode 48.24
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.86
181 TestImageBuild/serial/Setup 35.31
182 TestImageBuild/serial/NormalBuild 2.07
183 TestImageBuild/serial/BuildWithBuildArg 1.06
184 TestImageBuild/serial/BuildWithDockerIgnore 0.9
185 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.76
189 TestJSONOutput/start/Command 43.18
190 TestJSONOutput/start/Audit 0
192 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/pause/Command 0.63
196 TestJSONOutput/pause/Audit 0
198 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/unpause/Command 0.61
202 TestJSONOutput/unpause/Audit 0
204 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
207 TestJSONOutput/stop/Command 10.94
208 TestJSONOutput/stop/Audit 0
210 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
211 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
212 TestErrorJSONOutput 0.21
214 TestKicCustomNetwork/create_custom_network 34.49
215 TestKicCustomNetwork/use_default_bridge_network 35.32
216 TestKicExistingNetwork 32.08
217 TestKicCustomSubnet 39.34
218 TestKicStaticIP 34.53
219 TestMainNoArgs 0.05
220 TestMinikubeProfile 69.65
223 TestMountStart/serial/StartWithMountFirst 11.05
224 TestMountStart/serial/VerifyMountFirst 0.28
225 TestMountStart/serial/StartWithMountSecond 8.36
226 TestMountStart/serial/VerifyMountSecond 0.26
227 TestMountStart/serial/DeleteFirst 1.46
228 TestMountStart/serial/VerifyMountPostDelete 0.29
229 TestMountStart/serial/Stop 1.22
230 TestMountStart/serial/RestartStopped 8.85
231 TestMountStart/serial/VerifyMountPostStop 0.31
234 TestMultiNode/serial/FreshStart2Nodes 84.98
235 TestMultiNode/serial/DeployApp2Nodes 56.02
236 TestMultiNode/serial/PingHostFrom2Pods 1.1
237 TestMultiNode/serial/AddNode 19.16
238 TestMultiNode/serial/MultiNodeLabels 0.11
239 TestMultiNode/serial/ProfileList 0.36
240 TestMultiNode/serial/CopyFile 10.43
241 TestMultiNode/serial/StopNode 2.31
242 TestMultiNode/serial/StartAfterStop 11.16
243 TestMultiNode/serial/RestartKeepsNodes 107.84
244 TestMultiNode/serial/DeleteNode 5.72
245 TestMultiNode/serial/StopMultiNode 21.82
246 TestMultiNode/serial/RestartMultiNode 57.4
247 TestMultiNode/serial/ValidateNameConflict 36.18
252 TestPreload 141.39
254 TestScheduledStopUnix 107.34
255 TestSkaffold 120.39
257 TestInsufficientStorage 11.66
258 TestRunningBinaryUpgrade 135.71
260 TestKubernetesUpgrade 130.51
261 TestMissingContainerUpgrade 133.23
273 TestStoppedBinaryUpgrade/Setup 0.87
274 TestStoppedBinaryUpgrade/Upgrade 96.15
276 TestPause/serial/Start 56.1
277 TestStoppedBinaryUpgrade/MinikubeLogs 2.43
286 TestNoKubernetes/serial/StartNoK8sWithVersion 0.17
287 TestNoKubernetes/serial/StartWithK8s 39.63
288 TestPause/serial/SecondStartNoReconfiguration 31.14
289 TestNoKubernetes/serial/StartWithStopK8s 17.35
290 TestPause/serial/Pause 0.62
291 TestPause/serial/VerifyStatus 0.32
292 TestPause/serial/Unpause 0.51
293 TestPause/serial/PauseAgain 0.7
294 TestPause/serial/DeletePaused 2.35
295 TestPause/serial/VerifyDeletedResources 0.45
296 TestNetworkPlugins/group/auto/Start 88.31
297 TestNoKubernetes/serial/Start 12.99
298 TestNoKubernetes/serial/VerifyK8sNotRunning 0.39
299 TestNoKubernetes/serial/ProfileList 1.15
300 TestNoKubernetes/serial/Stop 1.29
301 TestNoKubernetes/serial/StartNoArgs 8.29
302 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.32
303 TestNetworkPlugins/group/kindnet/Start 72.5
304 TestNetworkPlugins/group/auto/KubeletFlags 0.3
305 TestNetworkPlugins/group/auto/NetCatPod 11.36
306 TestNetworkPlugins/group/auto/DNS 0.26
307 TestNetworkPlugins/group/auto/Localhost 0.17
308 TestNetworkPlugins/group/auto/HairPin 0.22
309 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
310 TestNetworkPlugins/group/kindnet/KubeletFlags 0.39
311 TestNetworkPlugins/group/kindnet/NetCatPod 11.38
312 TestNetworkPlugins/group/kindnet/DNS 0.29
313 TestNetworkPlugins/group/kindnet/Localhost 0.31
314 TestNetworkPlugins/group/kindnet/HairPin 0.24
315 TestNetworkPlugins/group/calico/Start 86.25
316 TestNetworkPlugins/group/custom-flannel/Start 65.73
317 TestNetworkPlugins/group/calico/ControllerPod 5.07
318 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.3
319 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.3
320 TestNetworkPlugins/group/calico/KubeletFlags 0.6
321 TestNetworkPlugins/group/calico/NetCatPod 11.42
322 TestNetworkPlugins/group/custom-flannel/DNS 0.29
323 TestNetworkPlugins/group/custom-flannel/Localhost 0.19
324 TestNetworkPlugins/group/custom-flannel/HairPin 0.25
325 TestNetworkPlugins/group/calico/DNS 0.2
326 TestNetworkPlugins/group/calico/Localhost 0.17
327 TestNetworkPlugins/group/calico/HairPin 0.17
328 TestNetworkPlugins/group/false/Start 55.6
329 TestNetworkPlugins/group/enable-default-cni/Start 87.78
330 TestNetworkPlugins/group/false/KubeletFlags 0.32
331 TestNetworkPlugins/group/false/NetCatPod 11.28
332 TestNetworkPlugins/group/false/DNS 0.19
333 TestNetworkPlugins/group/false/Localhost 0.18
334 TestNetworkPlugins/group/false/HairPin 0.18
335 TestNetworkPlugins/group/flannel/Start 60.51
336 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.42
337 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.38
338 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
339 TestNetworkPlugins/group/enable-default-cni/Localhost 0.19
340 TestNetworkPlugins/group/enable-default-cni/HairPin 0.24
341 TestNetworkPlugins/group/bridge/Start 47.37
342 TestNetworkPlugins/group/flannel/ControllerPod 6.01
343 TestNetworkPlugins/group/flannel/KubeletFlags 0.37
344 TestNetworkPlugins/group/flannel/NetCatPod 12.4
345 TestNetworkPlugins/group/flannel/DNS 0.25
346 TestNetworkPlugins/group/flannel/Localhost 0.16
347 TestNetworkPlugins/group/flannel/HairPin 0.23
348 TestNetworkPlugins/group/bridge/KubeletFlags 0.43
349 TestNetworkPlugins/group/bridge/NetCatPod 10.46
350 TestNetworkPlugins/group/bridge/DNS 0.25
351 TestNetworkPlugins/group/bridge/Localhost 0.23
352 TestNetworkPlugins/group/bridge/HairPin 0.22
353 TestNetworkPlugins/group/kubenet/Start 77.76
355 TestStartStop/group/old-k8s-version/serial/FirstStart 166.91
356 TestNetworkPlugins/group/kubenet/KubeletFlags 0.41
357 TestNetworkPlugins/group/kubenet/NetCatPod 12.35
358 TestNetworkPlugins/group/kubenet/DNS 0.29
359 TestNetworkPlugins/group/kubenet/Localhost 0.25
360 TestNetworkPlugins/group/kubenet/HairPin 0.26
362 TestStartStop/group/no-preload/serial/FirstStart 53.14
363 TestStartStop/group/no-preload/serial/DeployApp 9.34
364 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.16
365 TestStartStop/group/no-preload/serial/Stop 11.01
366 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
367 TestStartStop/group/no-preload/serial/SecondStart 270.32
368 TestStartStop/group/old-k8s-version/serial/DeployApp 9.97
369 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.48
370 TestStartStop/group/old-k8s-version/serial/Stop 11.67
371 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.23
372 TestStartStop/group/old-k8s-version/serial/SecondStart 140.68
373 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
374 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.1
375 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.22
376 TestStartStop/group/old-k8s-version/serial/Pause 2.84
378 TestStartStop/group/embed-certs/serial/FirstStart 46.88
379 TestStartStop/group/embed-certs/serial/DeployApp 9.38
380 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.13
381 TestStartStop/group/embed-certs/serial/Stop 11.26
382 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
383 TestStartStop/group/embed-certs/serial/SecondStart 268.13
384 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
385 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.12
386 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
387 TestStartStop/group/no-preload/serial/Pause 3.11
389 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 44.46
390 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.39
391 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.13
392 TestStartStop/group/default-k8s-diff-port/serial/Stop 11
393 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
394 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 268.18
395 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
396 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.1
397 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
398 TestStartStop/group/embed-certs/serial/Pause 3.03
400 TestStartStop/group/newest-cni/serial/FirstStart 39.16
401 TestStartStop/group/newest-cni/serial/DeployApp 0
402 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.26
403 TestStartStop/group/newest-cni/serial/Stop 10.94
404 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
405 TestStartStop/group/newest-cni/serial/SecondStart 21.06
406 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
407 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
408 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.27
409 TestStartStop/group/newest-cni/serial/Pause 3.77
410 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
411 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
412 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
413 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.85
TestDownloadOnly/v1.20.0/json-events (9.75s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-017300 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-017300 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (9.748490164s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (9.75s)
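The 9.75s figure in the PASS line is simply the raw duration from the Done line, `(9.748490164s)`, rounded to two decimals. A minimal Python sketch (not part of minikube; the helper name is ours) that extracts and rounds such a Go duration the same way:

```python
# Sketch: pull a Go-style duration like "(9.748490164s)" out of a test log
# line and round it to centiseconds, matching the figure in the PASS line.
import re

def rounded_seconds(done_line: str) -> float:
    """Extract the parenthesized Go duration and round to two decimals."""
    match = re.search(r"\(([\d.]+)s\)", done_line)
    if match is None:
        raise ValueError("no duration found in line")
    return round(float(match.group(1)), 2)

done = "aaa_download_only_test.go:81: (dbg) Done: ... (9.748490164s)"
print(rounded_seconds(done))  # -> 9.75
```

The same rounding explains the 5.58s reported for the v1.31.1 run from its raw `(5.575423612s)`.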

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
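The preload-exists subtest passes when the expected tarball is already on disk under the cache directory seen in the logs (`.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4`). A sketch of that check, with the path layout taken from the log and the helper name ours:

```python
# Sketch: mirror the preload-exists check — does the expected preload
# tarball sit under <minikube home>/cache/preloaded-tarball/?
import os
import tempfile

def preload_exists(minikube_home: str, k8s_version: str) -> bool:
    # Filename pattern copied from the download log above (arm64, docker runtime).
    name = f"preloaded-images-k8s-v18-{k8s_version}-docker-overlay2-arm64.tar.lz4"
    return os.path.isfile(
        os.path.join(minikube_home, "cache", "preloaded-tarball", name))

# Demonstrate against a throwaway cache layout rather than a real machine.
with tempfile.TemporaryDirectory() as home:
    target = os.path.join(home, "cache", "preloaded-tarball")
    os.makedirs(target)
    open(os.path.join(
        target,
        "preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4",
        ), "w").close()
    print(preload_exists(home, "v1.20.0"))  # -> True
    print(preload_exists(home, "v1.31.1"))  # -> False
```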

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-017300
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-017300: exit status 85 (151.728316ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-017300 | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC |          |
	|         | -p download-only-017300        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 16:55:38
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 16:55:38.400166    7567 out.go:345] Setting OutFile to fd 1 ...
	I0917 16:55:38.400310    7567 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 16:55:38.400322    7567 out.go:358] Setting ErrFile to fd 2...
	I0917 16:55:38.400328    7567 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 16:55:38.400621    7567 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-2253/.minikube/bin
	W0917 16:55:38.400782    7567 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19662-2253/.minikube/config/config.json: open /home/jenkins/minikube-integration/19662-2253/.minikube/config/config.json: no such file or directory
	I0917 16:55:38.401237    7567 out.go:352] Setting JSON to true
	I0917 16:55:38.402037    7567 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":2285,"bootTime":1726589854,"procs":148,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0917 16:55:38.402105    7567 start.go:139] virtualization:  
	I0917 16:55:38.405075    7567 out.go:97] [download-only-017300] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W0917 16:55:38.405267    7567 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19662-2253/.minikube/cache/preloaded-tarball: no such file or directory
	I0917 16:55:38.405304    7567 notify.go:220] Checking for updates...
	I0917 16:55:38.407730    7567 out.go:169] MINIKUBE_LOCATION=19662
	I0917 16:55:38.409651    7567 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 16:55:38.411583    7567 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19662-2253/kubeconfig
	I0917 16:55:38.413415    7567 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-2253/.minikube
	I0917 16:55:38.415160    7567 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0917 16:55:38.419434    7567 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0917 16:55:38.419767    7567 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 16:55:38.450905    7567 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0917 16:55:38.451066    7567 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 16:55:38.762481    7567 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-17 16:55:38.752747134 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0917 16:55:38.762589    7567 docker.go:318] overlay module found
	I0917 16:55:38.764827    7567 out.go:97] Using the docker driver based on user configuration
	I0917 16:55:38.764857    7567 start.go:297] selected driver: docker
	I0917 16:55:38.764864    7567 start.go:901] validating driver "docker" against <nil>
	I0917 16:55:38.764970    7567 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 16:55:38.819168    7567 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-17 16:55:38.810056625 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0917 16:55:38.819372    7567 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 16:55:38.819672    7567 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0917 16:55:38.819912    7567 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0917 16:55:38.821955    7567 out.go:169] Using Docker driver with root privileges
	I0917 16:55:38.823780    7567 cni.go:84] Creating CNI manager for ""
	I0917 16:55:38.823857    7567 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0917 16:55:38.823934    7567 start.go:340] cluster config:
	{Name:download-only-017300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-017300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 16:55:38.826207    7567 out.go:97] Starting "download-only-017300" primary control-plane node in "download-only-017300" cluster
	I0917 16:55:38.826230    7567 cache.go:121] Beginning downloading kic base image for docker with docker
	I0917 16:55:38.827972    7567 out.go:97] Pulling base image v0.0.45-1726589491-19662 ...
	I0917 16:55:38.828002    7567 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0917 16:55:38.828143    7567 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	I0917 16:55:38.842582    7567 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0917 16:55:38.842758    7567 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
	I0917 16:55:38.842863    7567 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0917 16:55:38.887786    7567 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0917 16:55:38.887819    7567 cache.go:56] Caching tarball of preloaded images
	I0917 16:55:38.887968    7567 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0917 16:55:38.895557    7567 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0917 16:55:38.895591    7567 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0917 16:55:38.988164    7567 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /home/jenkins/minikube-integration/19662-2253/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-017300 host does not exist
	  To start a cluster, run: "minikube start -p download-only-017300"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.15s)
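The preload download URL in the log carries a `?checksum=md5:<hex>` query parameter, implying the fetched tarball is verified against that digest. A hedged sketch of such a verification (minikube's own download code is not reproduced here; the payload below is synthetic stand-in bytes, not the real tarball):

```python
# Sketch: verify a download against an "md5:<hex>" checksum tag of the kind
# attached to the preload URL. Payload is synthetic, not the actual preload.
import hashlib

def verify_md5(payload: bytes, checksum_param: str) -> bool:
    """checksum_param is the URL query value, e.g. 'md5:1a3e8f9b...'."""
    algo, _, expected = checksum_param.partition(":")
    if algo != "md5":
        raise ValueError(f"unsupported checksum algorithm: {algo}")
    return hashlib.md5(payload).hexdigest() == expected

data = b"preload tarball stand-in"
tag = "md5:" + hashlib.md5(data).hexdigest()
print(verify_md5(data, tag))           # -> True
print(verify_md5(b"corrupted", tag))   # -> False
```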

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.33s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.33s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-017300
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.21s)

                                                
                                    
TestDownloadOnly/v1.31.1/json-events (5.58s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-253478 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-253478 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker  --container-runtime=docker: (5.575423612s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (5.58s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-253478
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-253478: exit status 85 (73.672623ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-017300 | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC |                     |
	|         | -p download-only-017300        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC | 17 Sep 24 16:55 UTC |
	| delete  | -p download-only-017300        | download-only-017300 | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC | 17 Sep 24 16:55 UTC |
	| start   | -o=json --download-only        | download-only-253478 | jenkins | v1.34.0 | 17 Sep 24 16:55 UTC |                     |
	|         | -p download-only-253478        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 16:55:48
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 16:55:48.850052    7770 out.go:345] Setting OutFile to fd 1 ...
	I0917 16:55:48.850175    7770 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 16:55:48.850180    7770 out.go:358] Setting ErrFile to fd 2...
	I0917 16:55:48.850192    7770 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 16:55:48.850538    7770 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-2253/.minikube/bin
	I0917 16:55:48.851038    7770 out.go:352] Setting JSON to true
	I0917 16:55:48.852093    7770 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":2295,"bootTime":1726589854,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0917 16:55:48.852166    7770 start.go:139] virtualization:  
	I0917 16:55:48.881348    7770 out.go:97] [download-only-253478] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0917 16:55:48.881660    7770 notify.go:220] Checking for updates...
	I0917 16:55:48.903780    7770 out.go:169] MINIKUBE_LOCATION=19662
	I0917 16:55:48.924629    7770 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 16:55:48.944883    7770 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19662-2253/kubeconfig
	I0917 16:55:48.970108    7770 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-2253/.minikube
	I0917 16:55:48.988605    7770 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0917 16:55:49.044783    7770 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0917 16:55:49.045066    7770 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 16:55:49.065811    7770 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0917 16:55:49.066025    7770 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 16:55:49.135063    7770 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-17 16:55:49.125521653 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0917 16:55:49.135176    7770 docker.go:318] overlay module found
	I0917 16:55:49.147779    7770 out.go:97] Using the docker driver based on user configuration
	I0917 16:55:49.147818    7770 start.go:297] selected driver: docker
	I0917 16:55:49.147826    7770 start.go:901] validating driver "docker" against <nil>
	I0917 16:55:49.147950    7770 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 16:55:49.204108    7770 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-17 16:55:49.194728151 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0917 16:55:49.204307    7770 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 16:55:49.204619    7770 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0917 16:55:49.204788    7770 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0917 16:55:49.217403    7770 out.go:169] Using Docker driver with root privileges
	I0917 16:55:49.226282    7770 cni.go:84] Creating CNI manager for ""
	I0917 16:55:49.226368    7770 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 16:55:49.226382    7770 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0917 16:55:49.226462    7770 start.go:340] cluster config:
	{Name:download-only-253478 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-253478 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 16:55:49.235781    7770 out.go:97] Starting "download-only-253478" primary control-plane node in "download-only-253478" cluster
	I0917 16:55:49.235821    7770 cache.go:121] Beginning downloading kic base image for docker with docker
	I0917 16:55:49.245563    7770 out.go:97] Pulling base image v0.0.45-1726589491-19662 ...
	I0917 16:55:49.245618    7770 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 16:55:49.245710    7770 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	I0917 16:55:49.261935    7770 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0917 16:55:49.262085    7770 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
	I0917 16:55:49.262104    7770 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory, skipping pull
	I0917 16:55:49.262109    7770 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 exists in cache, skipping pull
	I0917 16:55:49.262116    7770 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 as a tarball
	I0917 16:55:49.311316    7770 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0917 16:55:49.311341    7770 cache.go:56] Caching tarball of preloaded images
	I0917 16:55:49.311496    7770 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 16:55:49.321023    7770 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0917 16:55:49.321054    7770 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I0917 16:55:49.404994    7770 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4?checksum=md5:402f69b5e09ccb1e1dbe401b4cdd104d -> /home/jenkins/minikube-integration/19662-2253/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0917 16:55:52.940147    7770 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I0917 16:55:52.940245    7770 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19662-2253/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I0917 16:55:53.690737    7770 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 16:55:53.691121    7770 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/download-only-253478/config.json ...
	I0917 16:55:53.691160    7770 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/download-only-253478/config.json: {Name:mk63d145f3d3ad2ad55a561c7726bc5a799dfbab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:55:53.691343    7770 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 16:55:53.691499    7770 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/19662-2253/.minikube/cache/linux/arm64/v1.31.1/kubectl
	
	
	* The control-plane node download-only-253478 host does not exist
	  To start a cluster, run: "minikube start -p download-only-253478"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.07s)

TestDownloadOnly/v1.31.1/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.20s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-253478
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

TestBinaryMirror (0.57s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-460466 --alsologtostderr --binary-mirror http://127.0.0.1:37897 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-460466" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-460466
--- PASS: TestBinaryMirror (0.57s)

TestOffline (60.71s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-arm64 start -p offline-docker-498213 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-arm64 start -p offline-docker-498213 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (58.439756818s)
helpers_test.go:175: Cleaning up "offline-docker-498213" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p offline-docker-498213
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p offline-docker-498213: (2.268131137s)
--- PASS: TestOffline (60.71s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-731605
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-731605: exit status 85 (69.651484ms)

-- stdout --
	* Profile "addons-731605" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-731605"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-731605
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-731605: exit status 85 (67.428123ms)

-- stdout --
	* Profile "addons-731605" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-731605"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (222.85s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-731605 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-731605 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns: (3m42.846461994s)
--- PASS: TestAddons/Setup (222.85s)

TestAddons/serial/Volcano (44.2s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:905: volcano-admission stabilized in 59.408556ms
addons_test.go:913: volcano-controller stabilized in 60.247952ms
addons_test.go:897: volcano-scheduler stabilized in 60.462523ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-c5rq5" [2cc0f26c-02cc-4b68-90a0-2dd506214eeb] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.004895496s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-hh4j8" [1b0154c9-7aff-4ba0-8b90-360df414626b] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 6.003750398s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-dlh5d" [3c6b4f7a-5f4b-4164-b6d0-4bb34a12140b] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.006497596s
addons_test.go:932: (dbg) Run:  kubectl --context addons-731605 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-731605 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-731605 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [4f6a7283-ce8e-4c1b-8239-d6daae2325a4] Pending
helpers_test.go:344: "test-job-nginx-0" [4f6a7283-ce8e-4c1b-8239-d6daae2325a4] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [4f6a7283-ce8e-4c1b-8239-d6daae2325a4] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 16.004243797s
addons_test.go:968: (dbg) Run:  out/minikube-linux-arm64 -p addons-731605 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-linux-arm64 -p addons-731605 addons disable volcano --alsologtostderr -v=1: (10.46562726s)
--- PASS: TestAddons/serial/Volcano (44.20s)

TestAddons/serial/GCPAuth/Namespaces (0.18s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-731605 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-731605 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

TestAddons/parallel/Ingress (20.62s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-731605 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-731605 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-731605 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [6e68aa60-97d5-4ed9-ba98-d20bf8f9c23b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [6e68aa60-97d5-4ed9-ba98-d20bf8f9c23b] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003749711s
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-731605 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-731605 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-731605 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p addons-731605 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-arm64 -p addons-731605 addons disable ingress-dns --alsologtostderr -v=1: (1.461598557s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-arm64 -p addons-731605 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-arm64 -p addons-731605 addons disable ingress --alsologtostderr -v=1: (7.741068358s)
--- PASS: TestAddons/parallel/Ingress (20.62s)

TestAddons/parallel/InspektorGadget (12.54s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-rmt5s" [0d73ade8-bbad-4820-b99c-aeeb4ec606d2] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003982356s
addons_test.go:851: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-731605
addons_test.go:851: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-731605: (6.537062856s)
--- PASS: TestAddons/parallel/InspektorGadget (12.54s)

TestAddons/parallel/MetricsServer (6.72s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 3.12571ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-zjjq7" [5244e1a0-b041-4b8b-9a1a-97aa3d2df4f0] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.006317819s
addons_test.go:417: (dbg) Run:  kubectl --context addons-731605 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-arm64 -p addons-731605 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.72s)

TestAddons/parallel/CSI (46.56s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 9.306836ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-731605 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-731605 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-731605 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-731605 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-731605 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-731605 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-731605 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-731605 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-731605 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-731605 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-731605 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-731605 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [a8de2daa-42c8-45fd-b9b2-9c896b3b3cd0] Pending
helpers_test.go:344: "task-pv-pod" [a8de2daa-42c8-45fd-b9b2-9c896b3b3cd0] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [a8de2daa-42c8-45fd-b9b2-9c896b3b3cd0] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.003957206s
addons_test.go:590: (dbg) Run:  kubectl --context addons-731605 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-731605 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-731605 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-731605 delete pod task-pv-pod
addons_test.go:600: (dbg) Done: kubectl --context addons-731605 delete pod task-pv-pod: (1.570539746s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-731605 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-731605 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-731605 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-731605 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-731605 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-731605 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-731605 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-731605 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-731605 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-731605 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-731605 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-731605 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-731605 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-731605 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [95510457-e71c-4f7f-9a01-46fea1749427] Pending
helpers_test.go:344: "task-pv-pod-restore" [95510457-e71c-4f7f-9a01-46fea1749427] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [95510457-e71c-4f7f-9a01-46fea1749427] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003628007s
addons_test.go:632: (dbg) Run:  kubectl --context addons-731605 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-731605 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-731605 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-arm64 -p addons-731605 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-arm64 -p addons-731605 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.704628401s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-arm64 -p addons-731605 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (46.56s)

TestAddons/parallel/Headlamp (16.66s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-731605 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-8rxcl" [f082c229-4d7f-42e8-81bf-83f3958f353e] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-8rxcl" [f082c229-4d7f-42e8-81bf-83f3958f353e] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-8rxcl" [f082c229-4d7f-42e8-81bf-83f3958f353e] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-8rxcl" [f082c229-4d7f-42e8-81bf-83f3958f353e] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.005031052s
addons_test.go:839: (dbg) Run:  out/minikube-linux-arm64 -p addons-731605 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-arm64 -p addons-731605 addons disable headlamp --alsologtostderr -v=1: (5.689223996s)
--- PASS: TestAddons/parallel/Headlamp (16.66s)

TestAddons/parallel/CloudSpanner (6.71s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-4nkn2" [e8b3fb76-303a-4fd8-946d-ba9826abfdf5] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.005548129s
addons_test.go:870: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-731605
--- PASS: TestAddons/parallel/CloudSpanner (6.71s)

TestAddons/parallel/LocalPath (9.91s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-731605 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-731605 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-731605 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-731605 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-731605 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-731605 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-731605 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-731605 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [eadbb459-0328-4ead-a0b9-83d8977e81e1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [eadbb459-0328-4ead-a0b9-83d8977e81e1] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [eadbb459-0328-4ead-a0b9-83d8977e81e1] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.005233608s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-731605 get pvc test-pvc -o=json
2024/09/17 17:09:39 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1009: (dbg) Run:  out/minikube-linux-arm64 -p addons-731605 ssh "cat /opt/local-path-provisioner/pvc-2a686ca8-31e7-49b1-9287-cefb015b44f4_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-731605 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-731605 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-arm64 -p addons-731605 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (9.91s)

TestAddons/parallel/NvidiaDevicePlugin (5.5s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-9bwdv" [611e2832-baef-4884-ac81-badda29286e4] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.007571674s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-731605
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.50s)

TestAddons/parallel/Yakd (11.74s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-2bbn7" [5562cdf1-482f-448f-b8aa-069e25ba6473] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003423442s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-arm64 -p addons-731605 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-arm64 -p addons-731605 addons disable yakd --alsologtostderr -v=1: (5.740322887s)
--- PASS: TestAddons/parallel/Yakd (11.74s)

TestAddons/StoppedEnableDisable (11.29s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-731605
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-731605: (10.996861049s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-731605
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-731605
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-731605
--- PASS: TestAddons/StoppedEnableDisable (11.29s)

TestCertOptions (43.85s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-350251 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-350251 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (40.898218411s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-350251 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-350251 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-350251 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-350251" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-350251
E0917 17:49:20.277916    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/functional-612770/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-350251: (2.165847866s)
--- PASS: TestCertOptions (43.85s)

TestCertExpiration (250.2s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-125264 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-125264 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (43.478632259s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-125264 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
E0917 17:52:23.344697    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/functional-612770/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-125264 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (24.582174851s)
helpers_test.go:175: Cleaning up "cert-expiration-125264" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-125264
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-125264: (2.142074336s)
--- PASS: TestCertExpiration (250.20s)

TestDockerFlags (45.75s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-arm64 start -p docker-flags-636819 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-arm64 start -p docker-flags-636819 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (42.233072052s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-636819 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-636819 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-636819" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-flags-636819
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-flags-636819: (2.71151279s)
--- PASS: TestDockerFlags (45.75s)

TestForceSystemdFlag (55.2s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-911948 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E0917 17:47:42.174166    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-911948 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (52.240396304s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-911948 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-911948" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-911948
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-911948: (2.392635176s)
--- PASS: TestForceSystemdFlag (55.20s)

TestForceSystemdEnv (42.67s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-672454 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-672454 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (39.800361672s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-672454 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-672454" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-672454
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-672454: (2.410045011s)
--- PASS: TestForceSystemdEnv (42.67s)

TestErrorSpam/setup (35.73s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-877230 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-877230 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-877230 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-877230 --driver=docker  --container-runtime=docker: (35.733443004s)
--- PASS: TestErrorSpam/setup (35.73s)

TestErrorSpam/start (0.81s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-877230 --log_dir /tmp/nospam-877230 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-877230 --log_dir /tmp/nospam-877230 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-877230 --log_dir /tmp/nospam-877230 start --dry-run
--- PASS: TestErrorSpam/start (0.81s)

TestErrorSpam/status (1.14s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-877230 --log_dir /tmp/nospam-877230 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-877230 --log_dir /tmp/nospam-877230 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-877230 --log_dir /tmp/nospam-877230 status
--- PASS: TestErrorSpam/status (1.14s)

TestErrorSpam/pause (1.43s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-877230 --log_dir /tmp/nospam-877230 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-877230 --log_dir /tmp/nospam-877230 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-877230 --log_dir /tmp/nospam-877230 pause
--- PASS: TestErrorSpam/pause (1.43s)

TestErrorSpam/unpause (1.51s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-877230 --log_dir /tmp/nospam-877230 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-877230 --log_dir /tmp/nospam-877230 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-877230 --log_dir /tmp/nospam-877230 unpause
--- PASS: TestErrorSpam/unpause (1.51s)

TestErrorSpam/stop (11s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-877230 --log_dir /tmp/nospam-877230 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-877230 --log_dir /tmp/nospam-877230 stop: (10.808206341s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-877230 --log_dir /tmp/nospam-877230 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-877230 --log_dir /tmp/nospam-877230 stop
--- PASS: TestErrorSpam/stop (11.00s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19662-2253/.minikube/files/etc/test/nested/copy/7562/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (77.37s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-612770 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-612770 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (1m17.366285989s)
--- PASS: TestFunctional/serial/StartWithProxy (77.37s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (33.46s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-612770 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-612770 --alsologtostderr -v=8: (33.454222505s)
functional_test.go:663: soft start took 33.458168429s for "functional-612770" cluster.
--- PASS: TestFunctional/serial/SoftStart (33.46s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.11s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-612770 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.54s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-612770 cache add registry.k8s.io/pause:3.1: (1.209446389s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-612770 cache add registry.k8s.io/pause:3.3: (1.19718855s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-612770 cache add registry.k8s.io/pause:latest: (1.1289788s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.54s)

TestFunctional/serial/CacheCmd/cache/add_local (0.99s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-612770 /tmp/TestFunctionalserialCacheCmdcacheadd_local4155571469/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 cache add minikube-local-cache-test:functional-612770
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 cache delete minikube-local-cache-test:functional-612770
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-612770
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.99s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.62s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-612770 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (308.538013ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.62s)

TestFunctional/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 kubectl -- --context functional-612770 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-612770 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (45.23s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-612770 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-612770 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (45.231896861s)
functional_test.go:761: restart took 45.232018725s for "functional-612770" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (45.23s)

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-612770 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.17s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-612770 logs: (1.173284887s)
--- PASS: TestFunctional/serial/LogsCmd (1.17s)

TestFunctional/serial/LogsFileCmd (1.19s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 logs --file /tmp/TestFunctionalserialLogsFileCmd1605711582/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-612770 logs --file /tmp/TestFunctionalserialLogsFileCmd1605711582/001/logs.txt: (1.184169003s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.19s)

TestFunctional/serial/InvalidService (4.83s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-612770 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-612770
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-612770: exit status 115 (594.498673ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32308 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-612770 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.83s)

TestFunctional/parallel/ConfigCmd (0.46s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-612770 config get cpus: exit status 14 (75.029248ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-612770 config get cpus: exit status 14 (55.410391ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.46s)

TestFunctional/parallel/DashboardCmd (14.79s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-612770 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-612770 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 49220: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.79s)

TestFunctional/parallel/DryRun (0.59s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-612770 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-612770 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (249.606836ms)

-- stdout --
	* [functional-612770] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19662
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19662-2253/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-2253/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0917 17:14:52.476105   48823 out.go:345] Setting OutFile to fd 1 ...
	I0917 17:14:52.476309   48823 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:14:52.476320   48823 out.go:358] Setting ErrFile to fd 2...
	I0917 17:14:52.476326   48823 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:14:52.476689   48823 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-2253/.minikube/bin
	I0917 17:14:52.477172   48823 out.go:352] Setting JSON to false
	I0917 17:14:52.478354   48823 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3439,"bootTime":1726589854,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0917 17:14:52.478447   48823 start.go:139] virtualization:  
	I0917 17:14:52.482749   48823 out.go:177] * [functional-612770] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0917 17:14:52.484853   48823 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 17:14:52.484899   48823 notify.go:220] Checking for updates...
	I0917 17:14:52.491215   48823 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 17:14:52.493193   48823 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19662-2253/kubeconfig
	I0917 17:14:52.494980   48823 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-2253/.minikube
	I0917 17:14:52.496794   48823 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0917 17:14:52.498647   48823 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 17:14:52.501133   48823 config.go:182] Loaded profile config "functional-612770": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 17:14:52.501728   48823 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 17:14:52.547665   48823 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0917 17:14:52.547856   48823 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 17:14:52.633155   48823 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-17 17:14:52.62278748 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarc
h64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0917 17:14:52.633294   48823 docker.go:318] overlay module found
	I0917 17:14:52.636757   48823 out.go:177] * Using the docker driver based on existing profile
	I0917 17:14:52.638616   48823 start.go:297] selected driver: docker
	I0917 17:14:52.638638   48823 start.go:901] validating driver "docker" against &{Name:functional-612770 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-612770 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 17:14:52.638878   48823 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 17:14:52.641675   48823 out.go:201] 
	W0917 17:14:52.643817   48823 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0917 17:14:52.646032   48823 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-612770 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.59s)

TestFunctional/parallel/InternationalLanguage (0.24s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-612770 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-612770 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (241.493151ms)

-- stdout --
	* [functional-612770] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19662
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19662-2253/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-2253/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0917 17:14:52.257914   48750 out.go:345] Setting OutFile to fd 1 ...
	I0917 17:14:52.258338   48750 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:14:52.258345   48750 out.go:358] Setting ErrFile to fd 2...
	I0917 17:14:52.258350   48750 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:14:52.259372   48750 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-2253/.minikube/bin
	I0917 17:14:52.259796   48750 out.go:352] Setting JSON to false
	I0917 17:14:52.261053   48750 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3439,"bootTime":1726589854,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0917 17:14:52.261120   48750 start.go:139] virtualization:  
	I0917 17:14:52.265707   48750 out.go:177] * [functional-612770] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I0917 17:14:52.267937   48750 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 17:14:52.268002   48750 notify.go:220] Checking for updates...
	I0917 17:14:52.273109   48750 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 17:14:52.275065   48750 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19662-2253/kubeconfig
	I0917 17:14:52.277295   48750 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-2253/.minikube
	I0917 17:14:52.279273   48750 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0917 17:14:52.281533   48750 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 17:14:52.284109   48750 config.go:182] Loaded profile config "functional-612770": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 17:14:52.284647   48750 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 17:14:52.315322   48750 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0917 17:14:52.315447   48750 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 17:14:52.389979   48750 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-17 17:14:52.378818357 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0917 17:14:52.390100   48750 docker.go:318] overlay module found
	I0917 17:14:52.392547   48750 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0917 17:14:52.394407   48750 start.go:297] selected driver: docker
	I0917 17:14:52.394427   48750 start.go:901] validating driver "docker" against &{Name:functional-612770 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-612770 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 17:14:52.394530   48750 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 17:14:52.396914   48750 out.go:201] 
	W0917 17:14:52.398692   48750 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0917 17:14:52.400703   48750 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.24s)

TestFunctional/parallel/StatusCmd (1.41s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.41s)
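The `-f` argument above is rendered through a Go text/template. A minimal sketch of how that format string expands (the `Status` struct and sample values here are assumptions for illustration, not minikube's actual types; note that `kublet:` in the log is literal label text carried by the test's format string, while `{{.Kubelet}}` is the field that actually gets expanded):

```go
package main

import (
	"fmt"
	"strings"
	"text/template"
)

// Status mirrors the fields the format string references; the struct
// name and sample values are illustrative assumptions only.
type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

// render expands a status value through the same template shape the
// test passes via -f.
func render(s Status) string {
	const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}"
	var b strings.Builder
	template.Must(template.New("status").Parse(format)).Execute(&b, s)
	return b.String()
}

func main() {
	// → host:Running,kublet:Running,apiserver:Running,kubeconfig:Configured
	fmt.Println(render(Status{Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"}))
}
```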

TestFunctional/parallel/ServiceCmdConnect (12.73s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-612770 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-612770 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-5fbsv" [46b9b167-348e-42ff-882f-8fc29e9df7d9] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-5fbsv" [46b9b167-348e-42ff-882f-8fc29e9df7d9] Running
E0917 17:14:39.106682    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:14:39.113548    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:14:39.125093    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:14:39.146589    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:14:39.188193    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:14:39.269757    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.004250934s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 service hello-node-connect --url
E0917 17:14:41.677231    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:31839
functional_test.go:1675: http://192.168.49.2:31839: success! body:

Hostname: hello-node-connect-65d86f57f4-5fbsv

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31839
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.73s)

TestFunctional/parallel/AddonsCmd (0.22s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.22s)

TestFunctional/parallel/PersistentVolumeClaim (28.44s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [2735c173-93d5-4c0e-a873-56b9a2d5398f] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003460563s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-612770 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-612770 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-612770 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-612770 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [4fee0acd-1c3f-421d-8860-1ccd58557831] Pending
helpers_test.go:344: "sp-pod" [4fee0acd-1c3f-421d-8860-1ccd58557831] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [4fee0acd-1c3f-421d-8860-1ccd58557831] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.004167893s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-612770 exec sp-pod -- touch /tmp/mount/foo
E0917 17:14:39.431857    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/client.crt: no such file or directory" logger="UnhandledError"
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-612770 delete -f testdata/storage-provisioner/pod.yaml
E0917 17:14:39.753817    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:14:40.395418    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/client.crt: no such file or directory" logger="UnhandledError"
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-612770 delete -f testdata/storage-provisioner/pod.yaml: (1.380905238s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-612770 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [dc4f0059-80f2-4da5-8c2e-b6811a7448b9] Pending
helpers_test.go:344: "sp-pod" [dc4f0059-80f2-4da5-8c2e-b6811a7448b9] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [dc4f0059-80f2-4da5-8c2e-b6811a7448b9] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.006398621s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-612770 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (28.44s)

TestFunctional/parallel/SSHCmd (0.67s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.67s)

TestFunctional/parallel/CpCmd (2.28s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 ssh -n functional-612770 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 cp functional-612770:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2420972896/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 ssh -n functional-612770 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 ssh -n functional-612770 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.28s)

TestFunctional/parallel/FileSync (0.29s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/7562/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 ssh "sudo cat /etc/test/nested/copy/7562/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.29s)

TestFunctional/parallel/CertSync (2.06s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/7562.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 ssh "sudo cat /etc/ssl/certs/7562.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/7562.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 ssh "sudo cat /usr/share/ca-certificates/7562.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/75622.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 ssh "sudo cat /etc/ssl/certs/75622.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/75622.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 ssh "sudo cat /usr/share/ca-certificates/75622.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.06s)

TestFunctional/parallel/NodeLabels (0.13s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-612770 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.13s)
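The `--template` above ranges over the node's label map with a Go template. A small sketch of the same range-over-map expansion (the sample labels are hypothetical; `text/template` visits map keys in sorted order, which is what keeps the test's output stable):

```go
package main

import (
	"fmt"
	"strings"
	"text/template"
)

// labelKeys applies the same template shape the test uses against
// .metadata.labels: emit each key followed by a space.
func labelKeys(labels map[string]string) string {
	const tmpl = "{{range $k, $v := .}}{{$k}} {{end}}"
	var b strings.Builder
	template.Must(template.New("labels").Parse(tmpl)).Execute(&b, labels)
	return b.String()
}

func main() {
	// Prints the keys in sorted order, each followed by a space.
	fmt.Println(labelKeys(map[string]string{
		"kubernetes.io/hostname": "functional-612770",
		"kubernetes.io/arch":     "arm64",
	}))
}
```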

TestFunctional/parallel/NonActiveRuntimeDisabled (0.4s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-612770 ssh "sudo systemctl is-active crio": exit status 1 (396.492312ms)

-- stdout --
	inactive

-- /stdout --
** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.40s)

TestFunctional/parallel/License (0.29s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.29s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.66s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-612770 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-612770 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-612770 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-612770 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 46112: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.66s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-612770 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.45s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-612770 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [c7fd5d0c-ff4f-4b2b-843f-88a89a55f6d8] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [c7fd5d0c-ff4f-4b2b-843f-88a89a55f6d8] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.004219906s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.45s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-612770 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.10s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.102.142.198 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-612770 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (6.3s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-612770 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-612770 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-jwcqg" [71d76170-9b17-4bbc-aaf0-17daa4f0200f] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-jwcqg" [71d76170-9b17-4bbc-aaf0-17daa4f0200f] Running
E0917 17:14:44.239214    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.004001713s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.30s)

TestFunctional/parallel/ServiceCmd/List (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.51s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 service list -o json
functional_test.go:1494: Took "511.960728ms" to run "out/minikube-linux-arm64 -p functional-612770 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.5s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
E0917 17:14:49.360612    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.50s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:30595
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.51s)

TestFunctional/parallel/ProfileCmd/profile_list (0.52s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "421.055512ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "95.122362ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.52s)

TestFunctional/parallel/ServiceCmd/Format (0.53s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.53s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.55s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "454.749532ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "90.094322ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.55s)

TestFunctional/parallel/ServiceCmd/URL (0.5s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:30595
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.50s)

TestFunctional/parallel/MountCmd/any-port (9.81s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-612770 /tmp/TestFunctionalparallelMountCmdany-port118428129/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726593290754958293" to /tmp/TestFunctionalparallelMountCmdany-port118428129/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726593290754958293" to /tmp/TestFunctionalparallelMountCmdany-port118428129/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726593290754958293" to /tmp/TestFunctionalparallelMountCmdany-port118428129/001/test-1726593290754958293
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-612770 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (447.593274ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 17 17:14 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 17 17:14 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 17 17:14 test-1726593290754958293
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 ssh cat /mount-9p/test-1726593290754958293
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-612770 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [b3edb81f-3ca6-4fae-8b54-fbb2b2b2e2e1] Pending
helpers_test.go:344: "busybox-mount" [b3edb81f-3ca6-4fae-8b54-fbb2b2b2e2e1] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [b3edb81f-3ca6-4fae-8b54-fbb2b2b2e2e1] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [b3edb81f-3ca6-4fae-8b54-fbb2b2b2e2e1] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.003578924s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-612770 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 ssh stat /mount-9p/created-by-pod
E0917 17:14:59.602667    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-612770 /tmp/TestFunctionalparallelMountCmdany-port118428129/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.81s)

TestFunctional/parallel/MountCmd/specific-port (2.32s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-612770 /tmp/TestFunctionalparallelMountCmdspecific-port116110821/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-612770 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (611.489041ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-612770 /tmp/TestFunctionalparallelMountCmdspecific-port116110821/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-612770 ssh "sudo umount -f /mount-9p": exit status 1 (383.106056ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-612770 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-612770 /tmp/TestFunctionalparallelMountCmdspecific-port116110821/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.32s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.85s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-612770 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1328168206/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-612770 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1328168206/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-612770 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1328168206/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-612770 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-612770 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1328168206/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-612770 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1328168206/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-612770 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1328168206/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.85s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (1.15s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-612770 version -o=json --components: (1.151044385s)
--- PASS: TestFunctional/parallel/Version/components (1.15s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-612770 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-612770
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-612770
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-612770 image ls --format short --alsologtostderr:
I0917 17:15:15.441115   52067 out.go:345] Setting OutFile to fd 1 ...
I0917 17:15:15.441343   52067 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 17:15:15.441374   52067 out.go:358] Setting ErrFile to fd 2...
I0917 17:15:15.441409   52067 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 17:15:15.441684   52067 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-2253/.minikube/bin
I0917 17:15:15.442586   52067 config.go:182] Loaded profile config "functional-612770": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 17:15:15.442762   52067 config.go:182] Loaded profile config "functional-612770": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 17:15:15.443374   52067 cli_runner.go:164] Run: docker container inspect functional-612770 --format={{.State.Status}}
I0917 17:15:15.471233   52067 ssh_runner.go:195] Run: systemctl --version
I0917 17:15:15.471289   52067 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-612770
I0917 17:15:15.503248   52067 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19662-2253/.minikube/machines/functional-612770/id_rsa Username:docker}
I0917 17:15:15.616768   52067 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-612770 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 7f8aa378bb47d | 66MB   |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 24a140c548c07 | 94.7MB |
| docker.io/library/nginx                     | alpine            | b887aca7aed61 | 47MB   |
| registry.k8s.io/etcd                        | 3.5.15-0          | 27e3830e14027 | 139MB  |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| docker.io/kicbase/echo-server               | functional-612770 | ce2d2cda2d858 | 4.78MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| docker.io/library/minikube-local-cache-test | functional-612770 | 10d6aef2c0031 | 30B    |
| registry.k8s.io/kube-apiserver              | v1.31.1           | d3f53a98c0a9d | 91.6MB |
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 279f381cb3736 | 85.9MB |
| docker.io/library/nginx                     | latest            | 195245f0c7927 | 193MB  |
| registry.k8s.io/coredns/coredns             | v1.11.3           | 2f6c962e7b831 | 60.2MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/pause                       | 3.10              | afb61768ce381 | 514kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-612770 image ls --format table --alsologtostderr:
I0917 17:15:15.717363   52141 out.go:345] Setting OutFile to fd 1 ...
I0917 17:15:15.717542   52141 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 17:15:15.717555   52141 out.go:358] Setting ErrFile to fd 2...
I0917 17:15:15.717561   52141 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 17:15:15.717848   52141 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-2253/.minikube/bin
I0917 17:15:15.718655   52141 config.go:182] Loaded profile config "functional-612770": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 17:15:15.718830   52141 config.go:182] Loaded profile config "functional-612770": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 17:15:15.719494   52141 cli_runner.go:164] Run: docker container inspect functional-612770 --format={{.State.Status}}
I0917 17:15:15.744246   52141 ssh_runner.go:195] Run: systemctl --version
I0917 17:15:15.744307   52141 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-612770
I0917 17:15:15.777448   52141 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19662-2253/.minikube/machines/functional-612770/id_rsa Username:docker}
I0917 17:15:15.876958   52141 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-612770 image ls --format json --alsologtostderr:
[{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"91600000"},{"id":"7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"66000000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-612770"],"size":"4780000"},{"id":"195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139000000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"10d6aef2c0031b99932eb3d36fee4f31977f928dab436d6ec6a73e48496ea522","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-612770"],"size":"30"},{"id":"279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"85900000"},{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"60200000"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"},{"id":"24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"94700000"},{"id":"b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"47000000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-612770 image ls --format json --alsologtostderr:
I0917 17:15:15.991866   52221 out.go:345] Setting OutFile to fd 1 ...
I0917 17:15:15.992104   52221 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 17:15:15.992136   52221 out.go:358] Setting ErrFile to fd 2...
I0917 17:15:15.992167   52221 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 17:15:15.992503   52221 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-2253/.minikube/bin
I0917 17:15:15.993327   52221 config.go:182] Loaded profile config "functional-612770": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 17:15:15.993562   52221 config.go:182] Loaded profile config "functional-612770": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 17:15:15.994143   52221 cli_runner.go:164] Run: docker container inspect functional-612770 --format={{.State.Status}}
I0917 17:15:16.021452   52221 ssh_runner.go:195] Run: systemctl --version
I0917 17:15:16.021505   52221 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-612770
I0917 17:15:16.058339   52221 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19662-2253/.minikube/machines/functional-612770/id_rsa Username:docker}
I0917 17:15:16.156874   52221 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-612770 image ls --format yaml --alsologtostderr:
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "94700000"
- id: b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "47000000"
- id: 279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "85900000"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "60200000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 10d6aef2c0031b99932eb3d36fee4f31977f928dab436d6ec6a73e48496ea522
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-612770
size: "30"
- id: 7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "66000000"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "514000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-612770
size: "4780000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "91600000"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139000000"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-612770 image ls --format yaml --alsologtostderr:
I0917 17:15:15.441935   52068 out.go:345] Setting OutFile to fd 1 ...
I0917 17:15:15.442048   52068 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 17:15:15.442092   52068 out.go:358] Setting ErrFile to fd 2...
I0917 17:15:15.442105   52068 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 17:15:15.442364   52068 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-2253/.minikube/bin
I0917 17:15:15.443038   52068 config.go:182] Loaded profile config "functional-612770": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 17:15:15.443197   52068 config.go:182] Loaded profile config "functional-612770": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 17:15:15.443796   52068 cli_runner.go:164] Run: docker container inspect functional-612770 --format={{.State.Status}}
I0917 17:15:15.462038   52068 ssh_runner.go:195] Run: systemctl --version
I0917 17:15:15.462099   52068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-612770
I0917 17:15:15.487607   52068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19662-2253/.minikube/machines/functional-612770/id_rsa Username:docker}
I0917 17:15:15.588172   52068 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-612770 ssh pgrep buildkitd: exit status 1 (346.221853ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 image build -t localhost/my-image:functional-612770 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-612770 image build -t localhost/my-image:functional-612770 testdata/build --alsologtostderr: (2.724540818s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-612770 image build -t localhost/my-image:functional-612770 testdata/build --alsologtostderr:
I0917 17:15:16.037782   52227 out.go:345] Setting OutFile to fd 1 ...
I0917 17:15:16.037954   52227 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 17:15:16.037965   52227 out.go:358] Setting ErrFile to fd 2...
I0917 17:15:16.037972   52227 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 17:15:16.038260   52227 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-2253/.minikube/bin
I0917 17:15:16.038942   52227 config.go:182] Loaded profile config "functional-612770": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 17:15:16.042106   52227 config.go:182] Loaded profile config "functional-612770": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 17:15:16.042658   52227 cli_runner.go:164] Run: docker container inspect functional-612770 --format={{.State.Status}}
I0917 17:15:16.076074   52227 ssh_runner.go:195] Run: systemctl --version
I0917 17:15:16.076130   52227 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-612770
I0917 17:15:16.101771   52227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19662-2253/.minikube/machines/functional-612770/id_rsa Username:docker}
I0917 17:15:16.200738   52227 build_images.go:161] Building image from path: /tmp/build.1562373019.tar
I0917 17:15:16.200814   52227 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0917 17:15:16.210830   52227 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1562373019.tar
I0917 17:15:16.214803   52227 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1562373019.tar: stat -c "%s %y" /var/lib/minikube/build/build.1562373019.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1562373019.tar': No such file or directory
I0917 17:15:16.214831   52227 ssh_runner.go:362] scp /tmp/build.1562373019.tar --> /var/lib/minikube/build/build.1562373019.tar (3072 bytes)
I0917 17:15:16.244923   52227 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1562373019
I0917 17:15:16.254226   52227 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1562373019 -xf /var/lib/minikube/build/build.1562373019.tar
I0917 17:15:16.264257   52227 docker.go:360] Building image: /var/lib/minikube/build/build.1562373019
I0917 17:15:16.264337   52227 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-612770 /var/lib/minikube/build/build.1562373019
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.3s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.3s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.5s

#6 [2/3] RUN true
#6 DONE 0.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:3bbe428c9561485959cb372a80f3a8b5802d2d431881b13b6804f2196d3937c0 done
#8 naming to localhost/my-image:functional-612770 done
#8 DONE 0.1s
I0917 17:15:18.662328   52227 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-612770 /var/lib/minikube/build/build.1562373019: (2.397962452s)
I0917 17:15:18.662401   52227 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1562373019
I0917 17:15:18.671815   52227 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1562373019.tar
I0917 17:15:18.681227   52227 build_images.go:217] Built localhost/my-image:functional-612770 from /tmp/build.1562373019.tar
I0917 17:15:18.681258   52227 build_images.go:133] succeeded building to: functional-612770
I0917 17:15:18.681264   52227 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.29s)

TestFunctional/parallel/ImageCommands/Setup (4.61s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
2024/09/17 17:15:07 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (4.558578463s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-612770
--- PASS: TestFunctional/parallel/ImageCommands/Setup (4.61s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

TestFunctional/parallel/DockerEnv/bash (1.08s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-612770 docker-env) && out/minikube-linux-arm64 status -p functional-612770"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-612770 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.08s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 image load --daemon kicbase/echo-server:functional-612770 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-612770 image load --daemon kicbase/echo-server:functional-612770 --alsologtostderr: (1.003576214s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.30s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.92s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 image load --daemon kicbase/echo-server:functional-612770 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.92s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-612770
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 image load --daemon kicbase/echo-server:functional-612770 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.19s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 image save kicbase/echo-server:functional-612770 /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.32s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.44s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 image rm kicbase/echo-server:functional-612770 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.44s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.64s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.64s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.41s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-612770
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-612770 image save --daemon kicbase/echo-server:functional-612770 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-612770
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.41s)

TestFunctional/delete_echo-server_images (0.05s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-612770
--- PASS: TestFunctional/delete_echo-server_images (0.05s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-612770
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-612770
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (136.51s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-943929 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0917 17:16:01.046418    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:17:22.968230    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-943929 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (2m15.652257853s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-943929 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (136.51s)

TestMultiControlPlane/serial/DeployApp (9.09s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-943929 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-943929 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-943929 -- rollout status deployment/busybox: (5.73280733s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-943929 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-943929 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-943929 -- exec busybox-7dff88458-4rfpf -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-943929 -- exec busybox-7dff88458-fwwhp -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-943929 -- exec busybox-7dff88458-m2sqf -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-943929 -- exec busybox-7dff88458-4rfpf -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-943929 -- exec busybox-7dff88458-fwwhp -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-943929 -- exec busybox-7dff88458-m2sqf -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-943929 -- exec busybox-7dff88458-4rfpf -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-943929 -- exec busybox-7dff88458-fwwhp -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-943929 -- exec busybox-7dff88458-m2sqf -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (9.09s)

TestMultiControlPlane/serial/PingHostFromPods (1.73s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-943929 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-943929 -- exec busybox-7dff88458-4rfpf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-943929 -- exec busybox-7dff88458-4rfpf -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-943929 -- exec busybox-7dff88458-fwwhp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-943929 -- exec busybox-7dff88458-fwwhp -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-943929 -- exec busybox-7dff88458-m2sqf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-943929 -- exec busybox-7dff88458-m2sqf -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.73s)

TestMultiControlPlane/serial/AddWorkerNode (27.59s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-943929 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-943929 -v=7 --alsologtostderr: (26.477211151s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-943929 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-943929 status -v=7 --alsologtostderr: (1.110930546s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (27.59s)

TestMultiControlPlane/serial/NodeLabels (0.1s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-943929 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.10s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.8s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.80s)

TestMultiControlPlane/serial/CopyFile (20.17s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-943929 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-linux-arm64 -p ha-943929 status --output json -v=7 --alsologtostderr: (1.007877417s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-943929 cp testdata/cp-test.txt ha-943929:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-943929 ssh -n ha-943929 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-943929 cp ha-943929:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4266173360/001/cp-test_ha-943929.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-943929 ssh -n ha-943929 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-943929 cp ha-943929:/home/docker/cp-test.txt ha-943929-m02:/home/docker/cp-test_ha-943929_ha-943929-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-943929 ssh -n ha-943929 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-943929 ssh -n ha-943929-m02 "sudo cat /home/docker/cp-test_ha-943929_ha-943929-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-943929 cp ha-943929:/home/docker/cp-test.txt ha-943929-m03:/home/docker/cp-test_ha-943929_ha-943929-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-943929 ssh -n ha-943929 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-943929 ssh -n ha-943929-m03 "sudo cat /home/docker/cp-test_ha-943929_ha-943929-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-943929 cp ha-943929:/home/docker/cp-test.txt ha-943929-m04:/home/docker/cp-test_ha-943929_ha-943929-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-943929 ssh -n ha-943929 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-943929 ssh -n ha-943929-m04 "sudo cat /home/docker/cp-test_ha-943929_ha-943929-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-943929 cp testdata/cp-test.txt ha-943929-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-943929 ssh -n ha-943929-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-943929 cp ha-943929-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4266173360/001/cp-test_ha-943929-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-943929 ssh -n ha-943929-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-943929 cp ha-943929-m02:/home/docker/cp-test.txt ha-943929:/home/docker/cp-test_ha-943929-m02_ha-943929.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-943929 ssh -n ha-943929-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-943929 ssh -n ha-943929 "sudo cat /home/docker/cp-test_ha-943929-m02_ha-943929.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-943929 cp ha-943929-m02:/home/docker/cp-test.txt ha-943929-m03:/home/docker/cp-test_ha-943929-m02_ha-943929-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-943929 ssh -n ha-943929-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-943929 ssh -n ha-943929-m03 "sudo cat /home/docker/cp-test_ha-943929-m02_ha-943929-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-943929 cp ha-943929-m02:/home/docker/cp-test.txt ha-943929-m04:/home/docker/cp-test_ha-943929-m02_ha-943929-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-943929 ssh -n ha-943929-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-943929 ssh -n ha-943929-m04 "sudo cat /home/docker/cp-test_ha-943929-m02_ha-943929-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-943929 cp testdata/cp-test.txt ha-943929-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-943929 ssh -n ha-943929-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-943929 cp ha-943929-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4266173360/001/cp-test_ha-943929-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-943929 ssh -n ha-943929-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-943929 cp ha-943929-m03:/home/docker/cp-test.txt ha-943929:/home/docker/cp-test_ha-943929-m03_ha-943929.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-943929 ssh -n ha-943929-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-943929 ssh -n ha-943929 "sudo cat /home/docker/cp-test_ha-943929-m03_ha-943929.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-943929 cp ha-943929-m03:/home/docker/cp-test.txt ha-943929-m02:/home/docker/cp-test_ha-943929-m03_ha-943929-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-943929 ssh -n ha-943929-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-943929 ssh -n ha-943929-m02 "sudo cat /home/docker/cp-test_ha-943929-m03_ha-943929-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-943929 cp ha-943929-m03:/home/docker/cp-test.txt ha-943929-m04:/home/docker/cp-test_ha-943929-m03_ha-943929-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-943929 ssh -n ha-943929-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-943929 ssh -n ha-943929-m04 "sudo cat /home/docker/cp-test_ha-943929-m03_ha-943929-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-943929 cp testdata/cp-test.txt ha-943929-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-943929 ssh -n ha-943929-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-943929 cp ha-943929-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4266173360/001/cp-test_ha-943929-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-943929 ssh -n ha-943929-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-943929 cp ha-943929-m04:/home/docker/cp-test.txt ha-943929:/home/docker/cp-test_ha-943929-m04_ha-943929.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-943929 ssh -n ha-943929-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-943929 ssh -n ha-943929 "sudo cat /home/docker/cp-test_ha-943929-m04_ha-943929.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-943929 cp ha-943929-m04:/home/docker/cp-test.txt ha-943929-m02:/home/docker/cp-test_ha-943929-m04_ha-943929-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-943929 ssh -n ha-943929-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-943929 ssh -n ha-943929-m02 "sudo cat /home/docker/cp-test_ha-943929-m04_ha-943929-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-943929 cp ha-943929-m04:/home/docker/cp-test.txt ha-943929-m03:/home/docker/cp-test_ha-943929-m04_ha-943929-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-943929 ssh -n ha-943929-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-943929 ssh -n ha-943929-m03 "sudo cat /home/docker/cp-test_ha-943929-m04_ha-943929-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.17s)

TestMultiControlPlane/serial/StopSecondaryNode (11.74s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-943929 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-943929 node stop m02 -v=7 --alsologtostderr: (10.959695986s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-943929 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-943929 status -v=7 --alsologtostderr: exit status 7 (784.001238ms)

-- stdout --
	ha-943929
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-943929-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-943929-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-943929-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0917 17:18:48.609054   74553 out.go:345] Setting OutFile to fd 1 ...
	I0917 17:18:48.609251   74553 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:18:48.609265   74553 out.go:358] Setting ErrFile to fd 2...
	I0917 17:18:48.609272   74553 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:18:48.609568   74553 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-2253/.minikube/bin
	I0917 17:18:48.609831   74553 out.go:352] Setting JSON to false
	I0917 17:18:48.609894   74553 mustload.go:65] Loading cluster: ha-943929
	I0917 17:18:48.609949   74553 notify.go:220] Checking for updates...
	I0917 17:18:48.611350   74553 config.go:182] Loaded profile config "ha-943929": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 17:18:48.611379   74553 status.go:255] checking status of ha-943929 ...
	I0917 17:18:48.612034   74553 cli_runner.go:164] Run: docker container inspect ha-943929 --format={{.State.Status}}
	I0917 17:18:48.631974   74553 status.go:330] ha-943929 host status = "Running" (err=<nil>)
	I0917 17:18:48.632015   74553 host.go:66] Checking if "ha-943929" exists ...
	I0917 17:18:48.632480   74553 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-943929
	I0917 17:18:48.671365   74553 host.go:66] Checking if "ha-943929" exists ...
	I0917 17:18:48.671796   74553 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 17:18:48.671858   74553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-943929
	I0917 17:18:48.698014   74553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19662-2253/.minikube/machines/ha-943929/id_rsa Username:docker}
	I0917 17:18:48.797506   74553 ssh_runner.go:195] Run: systemctl --version
	I0917 17:18:48.805998   74553 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 17:18:48.821683   74553 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 17:18:48.885550   74553 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:71 SystemTime:2024-09-17 17:18:48.875241637 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0917 17:18:48.886125   74553 kubeconfig.go:125] found "ha-943929" server: "https://192.168.49.254:8443"
	I0917 17:18:48.886154   74553 api_server.go:166] Checking apiserver status ...
	I0917 17:18:48.886206   74553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 17:18:48.898844   74553 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2282/cgroup
	I0917 17:18:48.908892   74553 api_server.go:182] apiserver freezer: "5:freezer:/docker/b354429436738af67fb7e123bff843501c865e12f44f54d4b445f8d0d8e117e8/kubepods/burstable/pod9445d8c927389a1a74d282fb60f3ef64/33f5205d58b96efc2cc61230cdb21e3ce77e112e7b7824f64eba22293c2d348d"
	I0917 17:18:48.908962   74553 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/b354429436738af67fb7e123bff843501c865e12f44f54d4b445f8d0d8e117e8/kubepods/burstable/pod9445d8c927389a1a74d282fb60f3ef64/33f5205d58b96efc2cc61230cdb21e3ce77e112e7b7824f64eba22293c2d348d/freezer.state
	I0917 17:18:48.917684   74553 api_server.go:204] freezer state: "THAWED"
	I0917 17:18:48.917718   74553 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 17:18:48.927104   74553 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 17:18:48.927138   74553 status.go:422] ha-943929 apiserver status = Running (err=<nil>)
	I0917 17:18:48.927150   74553 status.go:257] ha-943929 status: &{Name:ha-943929 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 17:18:48.927168   74553 status.go:255] checking status of ha-943929-m02 ...
	I0917 17:18:48.927531   74553 cli_runner.go:164] Run: docker container inspect ha-943929-m02 --format={{.State.Status}}
	I0917 17:18:48.945197   74553 status.go:330] ha-943929-m02 host status = "Stopped" (err=<nil>)
	I0917 17:18:48.945222   74553 status.go:343] host is not running, skipping remaining checks
	I0917 17:18:48.945229   74553 status.go:257] ha-943929-m02 status: &{Name:ha-943929-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 17:18:48.945251   74553 status.go:255] checking status of ha-943929-m03 ...
	I0917 17:18:48.945568   74553 cli_runner.go:164] Run: docker container inspect ha-943929-m03 --format={{.State.Status}}
	I0917 17:18:48.965034   74553 status.go:330] ha-943929-m03 host status = "Running" (err=<nil>)
	I0917 17:18:48.965078   74553 host.go:66] Checking if "ha-943929-m03" exists ...
	I0917 17:18:48.965383   74553 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-943929-m03
	I0917 17:18:48.984856   74553 host.go:66] Checking if "ha-943929-m03" exists ...
	I0917 17:18:48.985188   74553 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 17:18:48.985242   74553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-943929-m03
	I0917 17:18:49.004602   74553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/19662-2253/.minikube/machines/ha-943929-m03/id_rsa Username:docker}
	I0917 17:18:49.105452   74553 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 17:18:49.118860   74553 kubeconfig.go:125] found "ha-943929" server: "https://192.168.49.254:8443"
	I0917 17:18:49.118892   74553 api_server.go:166] Checking apiserver status ...
	I0917 17:18:49.118938   74553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 17:18:49.132283   74553 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2126/cgroup
	I0917 17:18:49.142282   74553 api_server.go:182] apiserver freezer: "5:freezer:/docker/18b17cd9cdca347f5cf42381bda8eaf35beab786bdf9e8ba640e0c8d2326b449/kubepods/burstable/pod5dd0d9ca1c543c604f245abbd3712283/df51f9a62a9a0141631057cdbac4cf84fbfcc988f0ba9858e7135e9c1f5156b5"
	I0917 17:18:49.142452   74553 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/18b17cd9cdca347f5cf42381bda8eaf35beab786bdf9e8ba640e0c8d2326b449/kubepods/burstable/pod5dd0d9ca1c543c604f245abbd3712283/df51f9a62a9a0141631057cdbac4cf84fbfcc988f0ba9858e7135e9c1f5156b5/freezer.state
	I0917 17:18:49.152340   74553 api_server.go:204] freezer state: "THAWED"
	I0917 17:18:49.152371   74553 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 17:18:49.160304   74553 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 17:18:49.160333   74553 status.go:422] ha-943929-m03 apiserver status = Running (err=<nil>)
	I0917 17:18:49.160343   74553 status.go:257] ha-943929-m03 status: &{Name:ha-943929-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 17:18:49.160360   74553 status.go:255] checking status of ha-943929-m04 ...
	I0917 17:18:49.160663   74553 cli_runner.go:164] Run: docker container inspect ha-943929-m04 --format={{.State.Status}}
	I0917 17:18:49.178229   74553 status.go:330] ha-943929-m04 host status = "Running" (err=<nil>)
	I0917 17:18:49.178259   74553 host.go:66] Checking if "ha-943929-m04" exists ...
	I0917 17:18:49.178568   74553 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-943929-m04
	I0917 17:18:49.196934   74553 host.go:66] Checking if "ha-943929-m04" exists ...
	I0917 17:18:49.197303   74553 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 17:18:49.197351   74553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-943929-m04
	I0917 17:18:49.216880   74553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/19662-2253/.minikube/machines/ha-943929-m04/id_rsa Username:docker}
	I0917 17:18:49.318396   74553 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 17:18:49.335903   74553 status.go:257] ha-943929-m04 status: &{Name:ha-943929-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.74s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.61s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.61s)

TestMultiControlPlane/serial/RestartSecondaryNode (59.19s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-943929 node start m02 -v=7 --alsologtostderr
E0917 17:19:20.277567    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/functional-612770/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:19:20.284065    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/functional-612770/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:19:20.295550    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/functional-612770/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:19:20.317089    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/functional-612770/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:19:20.358493    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/functional-612770/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:19:20.440137    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/functional-612770/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:19:20.602273    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/functional-612770/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:19:20.923863    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/functional-612770/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:19:21.566035    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/functional-612770/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:19:22.847514    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/functional-612770/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:19:25.409603    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/functional-612770/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:19:30.531375    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/functional-612770/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:19:39.104663    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:19:40.773314    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/functional-612770/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-943929 node start m02 -v=7 --alsologtostderr: (58.070294508s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-943929 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-943929 status -v=7 --alsologtostderr: (1.02094119s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (59.19s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.77s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.77s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (186.04s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-943929 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-943929 -v=7 --alsologtostderr
E0917 17:20:01.254980    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/functional-612770/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:20:06.809514    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-943929 -v=7 --alsologtostderr: (34.552454438s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-943929 --wait=true -v=7 --alsologtostderr
E0917 17:20:42.218141    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/functional-612770/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:22:04.140110    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/functional-612770/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-943929 --wait=true -v=7 --alsologtostderr: (2m31.325766014s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-943929
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (186.04s)

TestMultiControlPlane/serial/DeleteSecondaryNode (11.67s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-943929 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-943929 node delete m03 -v=7 --alsologtostderr: (10.714551434s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-943929 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.67s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.56s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.56s)

TestMultiControlPlane/serial/StopCluster (33.19s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-943929 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-943929 stop -v=7 --alsologtostderr: (33.079666231s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-943929 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-943929 status -v=7 --alsologtostderr: exit status 7 (108.872267ms)

-- stdout --
	ha-943929
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-943929-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-943929-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0917 17:23:41.302522  101278 out.go:345] Setting OutFile to fd 1 ...
	I0917 17:23:41.302726  101278 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:23:41.302754  101278 out.go:358] Setting ErrFile to fd 2...
	I0917 17:23:41.302775  101278 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:23:41.303079  101278 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-2253/.minikube/bin
	I0917 17:23:41.303307  101278 out.go:352] Setting JSON to false
	I0917 17:23:41.303382  101278 mustload.go:65] Loading cluster: ha-943929
	I0917 17:23:41.303412  101278 notify.go:220] Checking for updates...
	I0917 17:23:41.303893  101278 config.go:182] Loaded profile config "ha-943929": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 17:23:41.303912  101278 status.go:255] checking status of ha-943929 ...
	I0917 17:23:41.304478  101278 cli_runner.go:164] Run: docker container inspect ha-943929 --format={{.State.Status}}
	I0917 17:23:41.324036  101278 status.go:330] ha-943929 host status = "Stopped" (err=<nil>)
	I0917 17:23:41.324057  101278 status.go:343] host is not running, skipping remaining checks
	I0917 17:23:41.324064  101278 status.go:257] ha-943929 status: &{Name:ha-943929 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 17:23:41.324093  101278 status.go:255] checking status of ha-943929-m02 ...
	I0917 17:23:41.324429  101278 cli_runner.go:164] Run: docker container inspect ha-943929-m02 --format={{.State.Status}}
	I0917 17:23:41.349707  101278 status.go:330] ha-943929-m02 host status = "Stopped" (err=<nil>)
	I0917 17:23:41.349726  101278 status.go:343] host is not running, skipping remaining checks
	I0917 17:23:41.349733  101278 status.go:257] ha-943929-m02 status: &{Name:ha-943929-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 17:23:41.349752  101278 status.go:255] checking status of ha-943929-m04 ...
	I0917 17:23:41.350037  101278 cli_runner.go:164] Run: docker container inspect ha-943929-m04 --format={{.State.Status}}
	I0917 17:23:41.367322  101278 status.go:330] ha-943929-m04 host status = "Stopped" (err=<nil>)
	I0917 17:23:41.367343  101278 status.go:343] host is not running, skipping remaining checks
	I0917 17:23:41.367351  101278 status.go:257] ha-943929-m04 status: &{Name:ha-943929-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (33.19s)

TestMultiControlPlane/serial/RestartCluster (153.23s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-943929 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0917 17:24:20.277203    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/functional-612770/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:24:39.104393    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:24:47.981713    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/functional-612770/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-943929 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (2m32.248234891s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-943929 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (153.23s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.62s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.62s)

TestMultiControlPlane/serial/AddSecondaryNode (48.24s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-943929 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-943929 --control-plane -v=7 --alsologtostderr: (47.175652253s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-943929 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-943929 status -v=7 --alsologtostderr: (1.067053127s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (48.24s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.86s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.86s)

TestImageBuild/serial/Setup (35.31s)
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -p image-817111 --driver=docker  --container-runtime=docker
image_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -p image-817111 --driver=docker  --container-runtime=docker: (35.309612934s)
--- PASS: TestImageBuild/serial/Setup (35.31s)

TestImageBuild/serial/NormalBuild (2.07s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-817111
image_test.go:78: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-817111: (2.066074032s)
--- PASS: TestImageBuild/serial/NormalBuild (2.07s)

TestImageBuild/serial/BuildWithBuildArg (1.06s)
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-817111
image_test.go:99: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-817111: (1.057363974s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.06s)

TestImageBuild/serial/BuildWithDockerIgnore (0.9s)
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-817111
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.90s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.76s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-817111
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.76s)

TestJSONOutput/start/Command (43.18s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-521377 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-521377 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (43.17401473s)
--- PASS: TestJSONOutput/start/Command (43.18s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.63s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-521377 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.63s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.61s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-521377 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.61s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (10.94s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-521377 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-521377 --output=json --user=testUser: (10.941948235s)
--- PASS: TestJSONOutput/stop/Command (10.94s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-424808 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-424808 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (69.776815ms)

-- stdout --
	{"specversion":"1.0","id":"b4427bf7-5995-450b-bee8-df401650246b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-424808] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"13fa37c2-ec13-4526-a126-54e8964a5d5a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19662"}}
	{"specversion":"1.0","id":"825fa0cf-7f51-4291-85d4-0db244bfda21","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f1837c8d-1741-420d-9a50-c69998b983ad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19662-2253/kubeconfig"}}
	{"specversion":"1.0","id":"a46b18fa-c6eb-4ba6-9f75-e6c30a5d1c4b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-2253/.minikube"}}
	{"specversion":"1.0","id":"d4385fb5-7f16-4610-841a-6c5740e8f12f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"2542a44c-24c8-4a4e-8329-e872506550be","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"8a661eb9-3310-4d32-91b1-3456e304abee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-424808" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-424808
--- PASS: TestErrorJSONOutput (0.21s)

TestKicCustomNetwork/create_custom_network (34.49s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-346018 --network=
E0917 17:29:20.277377    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/functional-612770/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-346018 --network=: (32.32672173s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-346018" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-346018
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-346018: (2.136885961s)
--- PASS: TestKicCustomNetwork/create_custom_network (34.49s)

TestKicCustomNetwork/use_default_bridge_network (35.32s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-868750 --network=bridge
E0917 17:29:39.104740    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-868750 --network=bridge: (32.608733233s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-868750" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-868750
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-868750: (2.61812165s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (35.32s)

TestKicExistingNetwork (32.08s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-705067 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-705067 --network=existing-network: (29.893153981s)
helpers_test.go:175: Cleaning up "existing-network-705067" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-705067
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-705067: (2.031544145s)
--- PASS: TestKicExistingNetwork (32.08s)

TestKicCustomSubnet (39.34s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-906185 --subnet=192.168.60.0/24
E0917 17:31:02.171838    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-906185 --subnet=192.168.60.0/24: (37.254334633s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-906185 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-906185" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-906185
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-906185: (2.049327303s)
--- PASS: TestKicCustomSubnet (39.34s)

TestKicStaticIP (34.53s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-846881 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-846881 --static-ip=192.168.200.200: (32.32002351s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-846881 ip
helpers_test.go:175: Cleaning up "static-ip-846881" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-846881
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-846881: (2.056255554s)
--- PASS: TestKicStaticIP (34.53s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (69.65s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-756255 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-756255 --driver=docker  --container-runtime=docker: (30.273408839s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-758897 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-758897 --driver=docker  --container-runtime=docker: (33.731118708s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-756255
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-758897
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-758897" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-758897
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-758897: (2.176241372s)
helpers_test.go:175: Cleaning up "first-756255" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-756255
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-756255: (2.13410181s)
--- PASS: TestMinikubeProfile (69.65s)

TestMountStart/serial/StartWithMountFirst (11.05s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-518267 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-518267 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (10.048536832s)
--- PASS: TestMountStart/serial/StartWithMountFirst (11.05s)

TestMountStart/serial/VerifyMountFirst (0.28s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-518267 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

TestMountStart/serial/StartWithMountSecond (8.36s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-520154 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-520154 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (7.361401597s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.36s)

TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-520154 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.46s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-518267 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-518267 --alsologtostderr -v=5: (1.463964655s)
--- PASS: TestMountStart/serial/DeleteFirst (1.46s)

TestMountStart/serial/VerifyMountPostDelete (0.29s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-520154 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.29s)

TestMountStart/serial/Stop (1.22s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-520154
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-520154: (1.217157532s)
--- PASS: TestMountStart/serial/Stop (1.22s)

TestMountStart/serial/RestartStopped (8.85s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-520154
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-520154: (7.850928692s)
--- PASS: TestMountStart/serial/RestartStopped (8.85s)

TestMountStart/serial/VerifyMountPostStop (0.31s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-520154 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.31s)

TestMultiNode/serial/FreshStart2Nodes (84.98s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-915105 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0917 17:34:20.277718    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/functional-612770/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:34:39.104149    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-915105 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m24.342094871s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-915105 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (84.98s)

TestMultiNode/serial/DeployApp2Nodes (56.02s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-915105 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-915105 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-915105 -- rollout status deployment/busybox: (5.051591511s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-915105 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-915105 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-915105 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-915105 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-915105 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-915105 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-915105 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-915105 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
E0917 17:35:43.343088    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/functional-612770/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-915105 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-915105 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-915105 -- exec busybox-7dff88458-9pqpp -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-915105 -- exec busybox-7dff88458-ckk67 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-915105 -- exec busybox-7dff88458-9pqpp -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-915105 -- exec busybox-7dff88458-ckk67 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-915105 -- exec busybox-7dff88458-9pqpp -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-915105 -- exec busybox-7dff88458-ckk67 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (56.02s)

TestMultiNode/serial/PingHostFrom2Pods (1.1s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-915105 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-915105 -- exec busybox-7dff88458-9pqpp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-915105 -- exec busybox-7dff88458-9pqpp -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-915105 -- exec busybox-7dff88458-ckk67 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-915105 -- exec busybox-7dff88458-ckk67 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.10s)

TestMultiNode/serial/AddNode (19.16s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-915105 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-915105 -v 3 --alsologtostderr: (18.329764734s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-915105 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (19.16s)

TestMultiNode/serial/MultiNodeLabels (0.11s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-915105 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.11s)

TestMultiNode/serial/ProfileList (0.36s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.36s)

TestMultiNode/serial/CopyFile (10.43s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-915105 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-915105 cp testdata/cp-test.txt multinode-915105:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-915105 ssh -n multinode-915105 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-915105 cp multinode-915105:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2220764571/001/cp-test_multinode-915105.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-915105 ssh -n multinode-915105 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-915105 cp multinode-915105:/home/docker/cp-test.txt multinode-915105-m02:/home/docker/cp-test_multinode-915105_multinode-915105-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-915105 ssh -n multinode-915105 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-915105 ssh -n multinode-915105-m02 "sudo cat /home/docker/cp-test_multinode-915105_multinode-915105-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-915105 cp multinode-915105:/home/docker/cp-test.txt multinode-915105-m03:/home/docker/cp-test_multinode-915105_multinode-915105-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-915105 ssh -n multinode-915105 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-915105 ssh -n multinode-915105-m03 "sudo cat /home/docker/cp-test_multinode-915105_multinode-915105-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-915105 cp testdata/cp-test.txt multinode-915105-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-915105 ssh -n multinode-915105-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-915105 cp multinode-915105-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2220764571/001/cp-test_multinode-915105-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-915105 ssh -n multinode-915105-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-915105 cp multinode-915105-m02:/home/docker/cp-test.txt multinode-915105:/home/docker/cp-test_multinode-915105-m02_multinode-915105.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-915105 ssh -n multinode-915105-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-915105 ssh -n multinode-915105 "sudo cat /home/docker/cp-test_multinode-915105-m02_multinode-915105.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-915105 cp multinode-915105-m02:/home/docker/cp-test.txt multinode-915105-m03:/home/docker/cp-test_multinode-915105-m02_multinode-915105-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-915105 ssh -n multinode-915105-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-915105 ssh -n multinode-915105-m03 "sudo cat /home/docker/cp-test_multinode-915105-m02_multinode-915105-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-915105 cp testdata/cp-test.txt multinode-915105-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-915105 ssh -n multinode-915105-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-915105 cp multinode-915105-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2220764571/001/cp-test_multinode-915105-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-915105 ssh -n multinode-915105-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-915105 cp multinode-915105-m03:/home/docker/cp-test.txt multinode-915105:/home/docker/cp-test_multinode-915105-m03_multinode-915105.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-915105 ssh -n multinode-915105-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-915105 ssh -n multinode-915105 "sudo cat /home/docker/cp-test_multinode-915105-m03_multinode-915105.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-915105 cp multinode-915105-m03:/home/docker/cp-test.txt multinode-915105-m02:/home/docker/cp-test_multinode-915105-m03_multinode-915105-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-915105 ssh -n multinode-915105-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-915105 ssh -n multinode-915105-m02 "sudo cat /home/docker/cp-test_multinode-915105-m03_multinode-915105-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.43s)
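
The copy steps above all follow one round-trip pattern: `minikube cp` a file onto a node, then `minikube ssh -n <node> "sudo cat …"` it back and compare with the source. A minimal local sketch of that round-trip, with plain `cp`/`cat` standing in for the minikube commands and illustrative file names (no running cluster needed):

```shell
#!/bin/sh
# Sketch of the cp-test round-trip: copy a file, read it back, compare contents.
set -eu
src=$(mktemp)                    # stands in for testdata/cp-test.txt
dst=$(mktemp -d)/cp-test.txt     # stands in for /home/docker/cp-test.txt on a node
printf 'hello from cp-test\n' > "$src"
cp "$src" "$dst"                 # like: minikube cp testdata/cp-test.txt <node>:/home/docker/cp-test.txt
roundtrip=$(cat "$dst")          # like: minikube ssh -n <node> "sudo cat /home/docker/cp-test.txt"
[ "$roundtrip" = "hello from cp-test" ] && echo "contents match"
```

The test repeats this for every source/destination node pair, including node-to-node copies, which is why the section runs the same `cp`/`ssh` pair so many times.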

TestMultiNode/serial/StopNode (2.31s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-915105 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-915105 node stop m03: (1.233839438s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-915105 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-915105 status: exit status 7 (530.387373ms)

-- stdout --
	multinode-915105
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-915105-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-915105-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-915105 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-915105 status --alsologtostderr: exit status 7 (546.59118ms)

-- stdout --
	multinode-915105
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-915105-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-915105-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0917 17:36:26.098744  177157 out.go:345] Setting OutFile to fd 1 ...
	I0917 17:36:26.099203  177157 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:36:26.099219  177157 out.go:358] Setting ErrFile to fd 2...
	I0917 17:36:26.099227  177157 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:36:26.099556  177157 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-2253/.minikube/bin
	I0917 17:36:26.099835  177157 out.go:352] Setting JSON to false
	I0917 17:36:26.099900  177157 mustload.go:65] Loading cluster: multinode-915105
	I0917 17:36:26.099977  177157 notify.go:220] Checking for updates...
	I0917 17:36:26.101052  177157 config.go:182] Loaded profile config "multinode-915105": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 17:36:26.101085  177157 status.go:255] checking status of multinode-915105 ...
	I0917 17:36:26.101722  177157 cli_runner.go:164] Run: docker container inspect multinode-915105 --format={{.State.Status}}
	I0917 17:36:26.120493  177157 status.go:330] multinode-915105 host status = "Running" (err=<nil>)
	I0917 17:36:26.120521  177157 host.go:66] Checking if "multinode-915105" exists ...
	I0917 17:36:26.120850  177157 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-915105
	I0917 17:36:26.150615  177157 host.go:66] Checking if "multinode-915105" exists ...
	I0917 17:36:26.150919  177157 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 17:36:26.150980  177157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-915105
	I0917 17:36:26.170378  177157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/19662-2253/.minikube/machines/multinode-915105/id_rsa Username:docker}
	I0917 17:36:26.269132  177157 ssh_runner.go:195] Run: systemctl --version
	I0917 17:36:26.273337  177157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 17:36:26.286410  177157 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 17:36:26.345989  177157 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:61 SystemTime:2024-09-17 17:36:26.335722462 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0917 17:36:26.346588  177157 kubeconfig.go:125] found "multinode-915105" server: "https://192.168.67.2:8443"
	I0917 17:36:26.346625  177157 api_server.go:166] Checking apiserver status ...
	I0917 17:36:26.346795  177157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 17:36:26.359500  177157 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2248/cgroup
	I0917 17:36:26.370945  177157 api_server.go:182] apiserver freezer: "5:freezer:/docker/397130042d06863e9c0aacb46fefcdd5e1d46863a3d4b23e3a834faaf69c223d/kubepods/burstable/podf3a3ad17c76e2278373f72f16e9f451f/3279e0d9be5bbb63d2cf6d35d049a595298515048d043145777f565d4b9b70e2"
	I0917 17:36:26.371037  177157 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/397130042d06863e9c0aacb46fefcdd5e1d46863a3d4b23e3a834faaf69c223d/kubepods/burstable/podf3a3ad17c76e2278373f72f16e9f451f/3279e0d9be5bbb63d2cf6d35d049a595298515048d043145777f565d4b9b70e2/freezer.state
	I0917 17:36:26.382058  177157 api_server.go:204] freezer state: "THAWED"
	I0917 17:36:26.382087  177157 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0917 17:36:26.390666  177157 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0917 17:36:26.390713  177157 status.go:422] multinode-915105 apiserver status = Running (err=<nil>)
	I0917 17:36:26.390741  177157 status.go:257] multinode-915105 status: &{Name:multinode-915105 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 17:36:26.390786  177157 status.go:255] checking status of multinode-915105-m02 ...
	I0917 17:36:26.391164  177157 cli_runner.go:164] Run: docker container inspect multinode-915105-m02 --format={{.State.Status}}
	I0917 17:36:26.408366  177157 status.go:330] multinode-915105-m02 host status = "Running" (err=<nil>)
	I0917 17:36:26.408395  177157 host.go:66] Checking if "multinode-915105-m02" exists ...
	I0917 17:36:26.408701  177157 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-915105-m02
	I0917 17:36:26.425348  177157 host.go:66] Checking if "multinode-915105-m02" exists ...
	I0917 17:36:26.425670  177157 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 17:36:26.425714  177157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-915105-m02
	I0917 17:36:26.447132  177157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/19662-2253/.minikube/machines/multinode-915105-m02/id_rsa Username:docker}
	I0917 17:36:26.549679  177157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 17:36:26.562728  177157 status.go:257] multinode-915105-m02 status: &{Name:multinode-915105-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0917 17:36:26.562761  177157 status.go:255] checking status of multinode-915105-m03 ...
	I0917 17:36:26.563073  177157 cli_runner.go:164] Run: docker container inspect multinode-915105-m03 --format={{.State.Status}}
	I0917 17:36:26.580145  177157 status.go:330] multinode-915105-m03 host status = "Stopped" (err=<nil>)
	I0917 17:36:26.580167  177157 status.go:343] host is not running, skipping remaining checks
	I0917 17:36:26.580175  177157 status.go:257] multinode-915105-m03 status: &{Name:multinode-915105-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.31s)
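
The non-zero exit (status 7) is expected here: one node's host is stopped, and the test only asserts that `status` reports it correctly. A self-contained sketch of extracting that fact from the captured stdout above, using a heredoc in place of the real `minikube status` call:

```shell
#!/bin/sh
# Count stopped hosts in captured `minikube status` output. The heredoc stands
# in for the real command's stdout shown in the -- stdout -- block above.
status_output=$(cat <<'EOF'
multinode-915105
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

multinode-915105-m02
type: Worker
host: Running
kubelet: Running

multinode-915105-m03
type: Worker
host: Stopped
kubelet: Stopped
EOF
)
stopped=$(printf '%s\n' "$status_output" | grep -c '^host: Stopped')
echo "stopped hosts: $stopped"
```

With one of three hosts stopped this reports `stopped hosts: 1`, matching the `Stopped` entry for `multinode-915105-m03` above.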

TestMultiNode/serial/StartAfterStop (11.16s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-915105 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-915105 node start m03 -v=7 --alsologtostderr: (10.35060754s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-915105 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (11.16s)

TestMultiNode/serial/RestartKeepsNodes (107.84s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-915105
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-915105
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-915105: (22.979756696s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-915105 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-915105 --wait=true -v=8 --alsologtostderr: (1m24.730656294s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-915105
--- PASS: TestMultiNode/serial/RestartKeepsNodes (107.84s)

TestMultiNode/serial/DeleteNode (5.72s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-915105 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-915105 node delete m03: (5.027851372s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-915105 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.72s)

TestMultiNode/serial/StopMultiNode (21.82s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-915105 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-915105 stop: (21.632354558s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-915105 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-915105 status: exit status 7 (100.385643ms)

-- stdout --
	multinode-915105
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-915105-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-915105 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-915105 status --alsologtostderr: exit status 7 (85.961123ms)

-- stdout --
	multinode-915105
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-915105-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0917 17:38:53.088995  190724 out.go:345] Setting OutFile to fd 1 ...
	I0917 17:38:53.089157  190724 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:38:53.089171  190724 out.go:358] Setting ErrFile to fd 2...
	I0917 17:38:53.089178  190724 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:38:53.089467  190724 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-2253/.minikube/bin
	I0917 17:38:53.089666  190724 out.go:352] Setting JSON to false
	I0917 17:38:53.089702  190724 mustload.go:65] Loading cluster: multinode-915105
	I0917 17:38:53.089777  190724 notify.go:220] Checking for updates...
	I0917 17:38:53.090128  190724 config.go:182] Loaded profile config "multinode-915105": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 17:38:53.090151  190724 status.go:255] checking status of multinode-915105 ...
	I0917 17:38:53.090787  190724 cli_runner.go:164] Run: docker container inspect multinode-915105 --format={{.State.Status}}
	I0917 17:38:53.110711  190724 status.go:330] multinode-915105 host status = "Stopped" (err=<nil>)
	I0917 17:38:53.110731  190724 status.go:343] host is not running, skipping remaining checks
	I0917 17:38:53.110740  190724 status.go:257] multinode-915105 status: &{Name:multinode-915105 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 17:38:53.110770  190724 status.go:255] checking status of multinode-915105-m02 ...
	I0917 17:38:53.111086  190724 cli_runner.go:164] Run: docker container inspect multinode-915105-m02 --format={{.State.Status}}
	I0917 17:38:53.127627  190724 status.go:330] multinode-915105-m02 host status = "Stopped" (err=<nil>)
	I0917 17:38:53.127648  190724 status.go:343] host is not running, skipping remaining checks
	I0917 17:38:53.127657  190724 status.go:257] multinode-915105-m02 status: &{Name:multinode-915105-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.82s)

TestMultiNode/serial/RestartMultiNode (57.4s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-915105 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0917 17:39:20.278151    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/functional-612770/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:39:39.104651    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-915105 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (56.704819838s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-915105 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (57.40s)

TestMultiNode/serial/ValidateNameConflict (36.18s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-915105
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-915105-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-915105-m02 --driver=docker  --container-runtime=docker: exit status 14 (97.083864ms)

-- stdout --
	* [multinode-915105-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19662
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19662-2253/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-2253/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-915105-m02' is duplicated with machine name 'multinode-915105-m02' in profile 'multinode-915105'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-915105-m03 --driver=docker  --container-runtime=docker
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-915105-m03 --driver=docker  --container-runtime=docker: (33.66104622s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-915105
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-915105: exit status 80 (334.345916ms)

-- stdout --
	* Adding node m03 to cluster multinode-915105 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-915105-m03 already exists in multinode-915105-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-915105-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-915105-m03: (2.033138044s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (36.18s)

TestPreload (141.39s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-256692 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-256692 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (1m41.856178197s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-256692 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-256692 image pull gcr.io/k8s-minikube/busybox: (2.081641946s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-256692
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-256692: (10.877032291s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-256692 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-256692 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (23.898605807s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-256692 image list
helpers_test.go:175: Cleaning up "test-preload-256692" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-256692
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-256692: (2.365543278s)
--- PASS: TestPreload (141.39s)

TestScheduledStopUnix (107.34s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-819582 --memory=2048 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-819582 --memory=2048 --driver=docker  --container-runtime=docker: (34.088334117s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-819582 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-819582 -n scheduled-stop-819582
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-819582 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-819582 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-819582 -n scheduled-stop-819582
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-819582
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-819582 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0917 17:44:20.277230    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/functional-612770/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-819582
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-819582: exit status 7 (66.477343ms)

-- stdout --
	scheduled-stop-819582
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-819582 -n scheduled-stop-819582
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-819582 -n scheduled-stop-819582: exit status 7 (67.94232ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-819582" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-819582
E0917 17:44:39.104727    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-819582: (1.655575241s)
--- PASS: TestScheduledStopUnix (107.34s)

TestSkaffold (120.39s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe4223563157 version
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p skaffold-722887 --memory=2600 --driver=docker  --container-runtime=docker
skaffold_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p skaffold-722887 --memory=2600 --driver=docker  --container-runtime=docker: (33.450114108s)
skaffold_test.go:86: copying out/minikube-linux-arm64 to /home/jenkins/workspace/Docker_Linux_docker_arm64/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe4223563157 run --minikube-profile skaffold-722887 --kube-context skaffold-722887 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe4223563157 run --minikube-profile skaffold-722887 --kube-context skaffold-722887 --status-check=true --port-forward=false --interactive=false: (1m11.517590262s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-5c4448b5-cmw5m" [17c7a676-ac8f-4d36-90b6-9b1ec283ac4d] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.004114212s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-6fb77b599d-64khf" [46e50245-1d01-4f3e-9bf0-d2a413a9122d] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003201328s
helpers_test.go:175: Cleaning up "skaffold-722887" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p skaffold-722887
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p skaffold-722887: (2.910397088s)
--- PASS: TestSkaffold (120.39s)

TestInsufficientStorage (11.66s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-453761 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-453761 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (9.349138727s)

-- stdout --
	{"specversion":"1.0","id":"d54712fd-69bc-4e6f-a6a2-79437659c9a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-453761] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d1ef4cf7-b131-4ac3-bcbb-601aed172d81","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19662"}}
	{"specversion":"1.0","id":"eca22b7d-d1ca-4d46-a8f1-8fdf3e39d734","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d2268aff-2ba1-40bc-8b56-c425b01f49f1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19662-2253/kubeconfig"}}
	{"specversion":"1.0","id":"824c8eb9-8105-4732-80ac-f8a69360a43a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-2253/.minikube"}}
	{"specversion":"1.0","id":"5fd43361-7da4-486d-8fd1-299614ca6adf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"f84aff12-cf85-4cd4-9f57-1092436e9734","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"5cc13ba1-5f1e-46a5-8ef8-7a8d51fe2cbb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"1c2f5ab4-a813-4f21-8086-6696e525f7dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"3e104a0a-b0a3-4538-b9c0-129e40a10533","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"a02d0226-0f80-443d-91ac-3a3e76f3c68f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"8fda3ce1-e65d-4e38-a2df-4670892dbb29","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-453761\" primary control-plane node in \"insufficient-storage-453761\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"9d8f0b03-295f-4548-bb1f-9db761d86d65","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1726589491-19662 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"bb2a9046-6846-4971-810d-1939675b08c4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"db858cd9-908e-478f-92db-933280c1fb00","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-453761 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-453761 --output=json --layout=cluster: exit status 7 (299.103973ms)

-- stdout --
	{"Name":"insufficient-storage-453761","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-453761","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0917 17:46:49.489721  225036 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-453761" does not appear in /home/jenkins/minikube-integration/19662-2253/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-453761 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-453761 --output=json --layout=cluster: exit status 7 (282.297774ms)

-- stdout --
	{"Name":"insufficient-storage-453761","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-453761","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0917 17:46:49.771738  225098 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-453761" does not appear in /home/jenkins/minikube-integration/19662-2253/kubeconfig
	E0917 17:46:49.782155  225098 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/insufficient-storage-453761/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-453761" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-453761
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-453761: (1.725558972s)
--- PASS: TestInsufficientStorage (11.66s)
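The `--output=json` start invocation above emits one CloudEvents-style JSON object per line, and the test's exit status 26 corresponds to the `io.k8s.sigs.minikube.error` event with `name: RSRC_DOCKER_STORAGE`. As an illustration (not part of the test suite), a minimal sketch of consuming such a stream and pulling out the first error event, using field names taken from the captured output above:

```python
import json

# A few lines of the CloudEvents stream captured above, abridged to the
# fields this sketch needs; minikube writes one JSON object per line.
stream = """\
{"specversion":"1.0","type":"io.k8s.sigs.minikube.info","data":{"message":"MINIKUBE_LOCATION=19662"}}
{"specversion":"1.0","type":"io.k8s.sigs.minikube.step","data":{"currentstep":"8","name":"Creating Container","totalsteps":"19"}}
{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"26","name":"RSRC_DOCKER_STORAGE","message":"Docker is out of disk space!"}}
"""

def first_error(lines):
    """Return the data payload of the first error event, or None."""
    for line in lines:
        if not line.strip():
            continue
        event = json.loads(line)
        if event.get("type") == "io.k8s.sigs.minikube.error":
            return event["data"]
    return None

err = first_error(stream.splitlines())
print(err["name"], err["exitcode"])  # RSRC_DOCKER_STORAGE 26
```

The `status_test.go` assertions work the same way: they match on the event `type` and the `exitcode` field rather than parsing human-readable text.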

TestRunningBinaryUpgrade (135.71s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.516540995 start -p running-upgrade-616688 --memory=2200 --vm-driver=docker  --container-runtime=docker
E0917 17:49:39.105099    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.516540995 start -p running-upgrade-616688 --memory=2200 --vm-driver=docker  --container-runtime=docker: (1m26.17991868s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-616688 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0917 17:51:25.919884    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/skaffold-722887/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:51:25.926297    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/skaffold-722887/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:51:25.937811    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/skaffold-722887/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:51:25.959426    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/skaffold-722887/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:51:26.000859    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/skaffold-722887/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:51:26.087942    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/skaffold-722887/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:51:26.250751    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/skaffold-722887/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:51:26.572321    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/skaffold-722887/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:51:27.213618    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/skaffold-722887/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:51:28.494840    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/skaffold-722887/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:51:31.057063    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/skaffold-722887/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-616688 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (46.470376803s)
helpers_test.go:175: Cleaning up "running-upgrade-616688" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-616688
E0917 17:51:36.179030    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/skaffold-722887/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-616688: (2.216514728s)
--- PASS: TestRunningBinaryUpgrade (135.71s)

TestKubernetesUpgrade (130.51s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-858930 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0917 17:52:47.865353    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/skaffold-722887/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-858930 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m2.681149193s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-858930
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-858930: (1.537034799s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-858930 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-858930 status --format={{.Host}}: exit status 7 (69.58986ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-858930 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-858930 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (31.675928797s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-858930 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-858930 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-858930 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker: exit status 106 (96.869924ms)

-- stdout --
	* [kubernetes-upgrade-858930] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19662
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19662-2253/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-2253/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-858930
	    minikube start -p kubernetes-upgrade-858930 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8589302 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-858930 --kubernetes-version=v1.31.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-858930 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0917 17:54:39.104721    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-858930 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (31.767834678s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-858930" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-858930
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-858930: (2.579370373s)
--- PASS: TestKubernetesUpgrade (130.51s)
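The exit status 106 / `K8S_DOWNGRADE_UNSUPPORTED` failure above is the expected behavior: minikube allows v1.20.0 → v1.31.1 but refuses to downgrade the existing v1.31.1 cluster back to v1.20.0. As an illustration only, a hypothetical sketch of the kind of version comparison behind such a guard (this is not minikube's actual implementation, which lives in its start command):

```python
def parse_version(v):
    """Turn a version string like 'v1.31.1' into a comparable tuple (1, 31, 1)."""
    return tuple(int(part) for part in v.lstrip("v").split("."))

def check_requested_version(existing, requested):
    """Reject a request that would downgrade an existing cluster.

    Hypothetical sketch of the guard behind K8S_DOWNGRADE_UNSUPPORTED;
    upgrades and same-version restarts pass through unchanged.
    """
    if parse_version(requested) < parse_version(existing):
        raise ValueError(
            f"Unable to safely downgrade existing Kubernetes "
            f"{existing} cluster to {requested}"
        )

check_requested_version("v1.20.0", "v1.31.1")      # upgrade: allowed
try:
    check_requested_version("v1.31.1", "v1.20.0")  # downgrade: rejected
except ValueError as e:
    print(e)
```

Tuple comparison makes `v1.9.0 < v1.20.0` come out correctly, which naive string comparison would get wrong; the suggested remedies in the log (delete and recreate, or start a second profile) are the only supported paths once the guard fires.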

TestMissingContainerUpgrade (133.23s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.1855054375 start -p missing-upgrade-482973 --memory=2200 --driver=docker  --container-runtime=docker
E0917 17:51:46.421445    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/skaffold-722887/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:52:06.903812    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/skaffold-722887/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.1855054375 start -p missing-upgrade-482973 --memory=2200 --driver=docker  --container-runtime=docker: (49.601271086s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-482973
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-482973: (10.6571014s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-482973
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-482973 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-482973 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m9.421110579s)
helpers_test.go:175: Cleaning up "missing-upgrade-482973" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-482973
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-482973: (2.701841828s)
--- PASS: TestMissingContainerUpgrade (133.23s)

TestStoppedBinaryUpgrade/Setup (0.87s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.87s)

TestStoppedBinaryUpgrade/Upgrade (96.15s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2919091327 start -p stopped-upgrade-001546 --memory=2200 --vm-driver=docker  --container-runtime=docker
E0917 17:54:09.787548    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/skaffold-722887/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:54:20.278218    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/functional-612770/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2919091327 start -p stopped-upgrade-001546 --memory=2200 --vm-driver=docker  --container-runtime=docker: (56.232006313s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2919091327 -p stopped-upgrade-001546 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2919091327 -p stopped-upgrade-001546 stop: (2.323908279s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-001546 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-001546 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (37.593401438s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (96.15s)

TestPause/serial/Start (56.1s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-106026 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-106026 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (56.101722828s)
--- PASS: TestPause/serial/Start (56.10s)

TestStoppedBinaryUpgrade/MinikubeLogs (2.43s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-001546
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-001546: (2.434177647s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.43s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.17s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-737571 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-737571 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (171.74201ms)

-- stdout --
	* [NoKubernetes-737571] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19662
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19662-2253/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-2253/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.17s)

TestNoKubernetes/serial/StartWithK8s (39.63s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-737571 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-737571 --driver=docker  --container-runtime=docker: (39.076929738s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-737571 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (39.63s)

TestPause/serial/SecondStartNoReconfiguration (31.14s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-106026 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-106026 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (31.121607798s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (31.14s)

TestNoKubernetes/serial/StartWithStopK8s (17.35s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-737571 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-737571 --no-kubernetes --driver=docker  --container-runtime=docker: (15.10378278s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-737571 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-737571 status -o json: exit status 2 (397.547807ms)

-- stdout --
	{"Name":"NoKubernetes-737571","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-737571
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-737571: (1.850046518s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.35s)

TestPause/serial/Pause (0.62s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-106026 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.62s)

TestPause/serial/VerifyStatus (0.32s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-106026 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-106026 --output=json --layout=cluster: exit status 2 (316.752053ms)

-- stdout --
	{"Name":"pause-106026","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-106026","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
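The status payload above is machine-readable. As a side note for triage, a minimal sketch of pulling the status fields out of such a payload with python3 (a trimmed copy of the logged JSON, assuming python3 is on PATH; note 418 is minikube's "Paused" status code):

```shell
# Trimmed copy of the `minikube status --output=json --layout=cluster`
# payload logged above.
status='{"Name":"pause-106026","StatusCode":418,"StatusName":"Paused"}'
# Print the human-readable status name alongside its numeric code.
echo "$status" | python3 -c 'import json,sys; d=json.load(sys.stdin); print(d["StatusName"], d["StatusCode"])'
# prints: Paused 418
```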
--- PASS: TestPause/serial/VerifyStatus (0.32s)

TestPause/serial/Unpause (0.51s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-106026 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.51s)

TestPause/serial/PauseAgain (0.7s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-106026 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.70s)

TestPause/serial/DeletePaused (2.35s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-106026 --alsologtostderr -v=5
E0917 17:56:25.919130    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/skaffold-722887/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-106026 --alsologtostderr -v=5: (2.349429543s)
--- PASS: TestPause/serial/DeletePaused (2.35s)

TestPause/serial/VerifyDeletedResources (0.45s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-106026
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-106026: exit status 1 (27.918624ms)

-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-106026: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.45s)

TestNetworkPlugins/group/auto/Start (88.31s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-319495 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-319495 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (1m28.312884887s)
--- PASS: TestNetworkPlugins/group/auto/Start (88.31s)

TestNoKubernetes/serial/Start (12.99s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-737571 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-737571 --no-kubernetes --driver=docker  --container-runtime=docker: (12.993100456s)
--- PASS: TestNoKubernetes/serial/Start (12.99s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.39s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-737571 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-737571 "sudo systemctl is-active --quiet service kubelet": exit status 1 (392.697169ms)

** stderr ** 
	ssh: Process exited with status 3
** /stderr **
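The check above treats a non-zero `systemctl is-active` exit status as confirmation the kubelet is down; the ssh process reports status 3, which is systemd's code for "inactive". A minimal local sketch of the same branching, using a hypothetical stub in place of the real remote `systemctl` call:

```shell
# Stub standing in for `sudo systemctl is-active --quiet service kubelet`
# run over ssh; systemd uses exit 0 for active and 3 for inactive.
is_active() { return 3; }

if is_active; then
  echo "kubelet running"
else
  # $? here still holds the condition's exit status.
  echo "kubelet not running (exit $?)"
fi
# prints: kubelet not running (exit 3)
```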
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.39s)

TestNoKubernetes/serial/ProfileList (1.15s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.15s)

TestNoKubernetes/serial/Stop (1.29s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-737571
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-737571: (1.291779045s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

TestNoKubernetes/serial/StartNoArgs (8.29s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-737571 --driver=docker  --container-runtime=docker
E0917 17:56:53.628862    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/skaffold-722887/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-737571 --driver=docker  --container-runtime=docker: (8.294494494s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.29s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-737571 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-737571 "sudo systemctl is-active --quiet service kubelet": exit status 1 (315.3614ms)

** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)

TestNetworkPlugins/group/kindnet/Start (72.5s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-319495 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-319495 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (1m12.50484302s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (72.50s)

TestNetworkPlugins/group/auto/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-319495 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.30s)

TestNetworkPlugins/group/auto/NetCatPod (11.36s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-319495 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-tgznm" [dc4ce1a3-c4ff-438d-8896-41598c04abe5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-tgznm" [dc4ce1a3-c4ff-438d-8896-41598c04abe5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.00493237s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.36s)

TestNetworkPlugins/group/auto/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-319495 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.26s)

TestNetworkPlugins/group/auto/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-319495 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

TestNetworkPlugins/group/auto/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-319495 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.22s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-4b558" [6b455e59-773d-42de-82fc-d0803af48584] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005804758s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-319495 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.39s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.38s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-319495 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-ngfbf" [e0cda950-6bc4-4016-af7d-bb1ed418527a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-ngfbf" [e0cda950-6bc4-4016-af7d-bb1ed418527a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004038931s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.38s)

TestNetworkPlugins/group/kindnet/DNS (0.29s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-319495 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.29s)

TestNetworkPlugins/group/kindnet/Localhost (0.31s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-319495 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.31s)

TestNetworkPlugins/group/kindnet/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-319495 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.24s)

TestNetworkPlugins/group/calico/Start (86.25s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-319495 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-319495 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m26.251541393s)
--- PASS: TestNetworkPlugins/group/calico/Start (86.25s)

TestNetworkPlugins/group/custom-flannel/Start (65.73s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-319495 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
E0917 17:59:20.277667    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/functional-612770/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:59:39.104486    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-319495 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (1m5.730959037s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (65.73s)

TestNetworkPlugins/group/calico/ControllerPod (5.07s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-957kf" [bf0d9fa7-3e47-49e1-b8d6-c32732b2d22a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.031728771s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.07s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-319495 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (13.3s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-319495 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-mvbtn" [c5dbdfba-4a26-4794-8c0b-dfcd11a99876] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-mvbtn" [c5dbdfba-4a26-4794-8c0b-dfcd11a99876] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.00399158s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.30s)

TestNetworkPlugins/group/calico/KubeletFlags (0.6s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-319495 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.60s)

TestNetworkPlugins/group/calico/NetCatPod (11.42s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-319495 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-tl78b" [98ea790e-e495-464d-aeec-27c6fb3438e7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-tl78b" [98ea790e-e495-464d-aeec-27c6fb3438e7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.003413151s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.42s)

TestNetworkPlugins/group/custom-flannel/DNS (0.29s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-319495 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.29s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-319495 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-319495 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.25s)

TestNetworkPlugins/group/calico/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-319495 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.20s)

TestNetworkPlugins/group/calico/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-319495 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

TestNetworkPlugins/group/calico/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-319495 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.17s)

TestNetworkPlugins/group/false/Start (55.6s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p false-319495 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p false-319495 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (55.602849266s)
--- PASS: TestNetworkPlugins/group/false/Start (55.60s)

TestNetworkPlugins/group/enable-default-cni/Start (87.78s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-319495 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
E0917 18:01:25.919339    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/skaffold-722887/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-319495 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (1m27.782265823s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (87.78s)

TestNetworkPlugins/group/false/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p false-319495 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.32s)

TestNetworkPlugins/group/false/NetCatPod (11.28s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-319495 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-jgn7l" [02321719-d1cb-4f2c-8dfb-364a0e015a3e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-jgn7l" [02321719-d1cb-4f2c-8dfb-364a0e015a3e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 11.003638949s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (11.28s)

TestNetworkPlugins/group/false/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-319495 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.19s)

TestNetworkPlugins/group/false/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-319495 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.18s)

TestNetworkPlugins/group/false/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-319495 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.18s)

TestNetworkPlugins/group/flannel/Start (60.51s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-319495 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-319495 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (1m0.506211921s)
--- PASS: TestNetworkPlugins/group/flannel/Start (60.51s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.42s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-319495 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.42s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.38s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-319495 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-pkckh" [9a19b1fc-631a-4060-819b-130bd759c148] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-pkckh" [9a19b1fc-631a-4060-819b-130bd759c148] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 13.005718682s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.38s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-319495 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-319495 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-319495 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.24s)

TestNetworkPlugins/group/bridge/Start (47.37s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-319495 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
E0917 18:02:56.782597    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/auto-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:02:56.789048    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/auto-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:02:56.800391    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/auto-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:02:56.821703    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/auto-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:02:56.863110    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/auto-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:02:56.945408    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/auto-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:02:57.106894    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/auto-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:02:57.428109    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/auto-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:02:58.069778    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/auto-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:02:59.351585    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/auto-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:03:01.912958    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/auto-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:03:07.034809    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/auto-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:03:08.427967    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/kindnet-319495/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-319495 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (47.371965347s)
--- PASS: TestNetworkPlugins/group/bridge/Start (47.37s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
E0917 18:03:08.435680    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/kindnet-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:03:08.451444    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/kindnet-319495/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "kube-flannel-ds-6m5h9" [24484d71-79de-43b1-951d-26f9d8e3adc6] Running
E0917 18:03:08.474010    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/kindnet-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:03:08.515896    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/kindnet-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:03:08.597236    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/kindnet-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:03:08.758571    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/kindnet-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:03:09.080186    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/kindnet-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:03:09.721867    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/kindnet-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:03:11.004230    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/kindnet-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:03:13.565535    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/kindnet-319495/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004254854s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-319495 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.37s)

TestNetworkPlugins/group/flannel/NetCatPod (12.40s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-319495 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-fvdlz" [2d5c807d-3b8c-44d9-a603-e6e1f3717728] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0917 18:03:17.276634    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/auto-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:03:18.687384    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/kindnet-319495/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-fvdlz" [2d5c807d-3b8c-44d9-a603-e6e1f3717728] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.004163168s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.40s)

TestNetworkPlugins/group/flannel/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-319495 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.25s)

TestNetworkPlugins/group/flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-319495 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

TestNetworkPlugins/group/flannel/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-319495 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.23s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.43s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-319495 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.43s)

TestNetworkPlugins/group/bridge/NetCatPod (10.46s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-319495 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-l749t" [2e32f71a-c16d-4c3d-a1c3-d1d7a5ecd805] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0917 18:03:37.758834    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/auto-319495/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-l749t" [2e32f71a-c16d-4c3d-a1c3-d1d7a5ecd805] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004395846s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.46s)

TestNetworkPlugins/group/bridge/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-319495 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.25s)

TestNetworkPlugins/group/bridge/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-319495 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.23s)

TestNetworkPlugins/group/bridge/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-319495 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.22s)

TestNetworkPlugins/group/kubenet/Start (77.76s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kubenet-319495 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kubenet-319495 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (1m17.758620254s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (77.76s)

TestStartStop/group/old-k8s-version/serial/FirstStart (166.91s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-327219 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0917 18:04:18.720798    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/auto-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:04:20.277709    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/functional-612770/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:04:22.175718    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:04:30.373329    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/kindnet-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:04:39.104829    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:04:56.722240    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/calico-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:04:56.728946    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/calico-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:04:56.740272    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/calico-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:04:56.761597    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/calico-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:04:56.803062    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/calico-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:04:56.885265    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/calico-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:04:57.046686    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/calico-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:04:57.368269    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/calico-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:04:58.010285    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/calico-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:04:58.377203    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/custom-flannel-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:04:58.383589    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/custom-flannel-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:04:58.394958    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/custom-flannel-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:04:58.416341    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/custom-flannel-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:04:58.458036    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/custom-flannel-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:04:58.539464    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/custom-flannel-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:04:58.700846    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/custom-flannel-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:04:59.022378    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/custom-flannel-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:04:59.292138    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/calico-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:04:59.664633    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/custom-flannel-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:05:00.946768    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/custom-flannel-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:05:01.854211    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/calico-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:05:03.508362    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/custom-flannel-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:05:06.975982    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/calico-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:05:08.630086    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/custom-flannel-319495/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-327219 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m46.907299986s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (166.91s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kubenet-319495 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.41s)

TestNetworkPlugins/group/kubenet/NetCatPod (12.35s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-319495 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-smqdd" [17fc40dc-35a2-4978-8072-ce9768b566a7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-smqdd" [17fc40dc-35a2-4978-8072-ce9768b566a7] Running
E0917 18:05:17.218139    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/calico-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:05:18.871846    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/custom-flannel-319495/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 12.003941534s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (12.35s)

TestNetworkPlugins/group/kubenet/DNS (0.29s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-319495 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.29s)

TestNetworkPlugins/group/kubenet/Localhost (0.25s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-319495 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.25s)

TestNetworkPlugins/group/kubenet/HairPin (0.26s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-319495 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.26s)
E0917 18:17:18.334947    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/no-preload-201228/client.crt: no such file or directory" logger="UnhandledError"

TestStartStop/group/no-preload/serial/FirstStart (53.14s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-201228 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0917 18:05:52.294935    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/kindnet-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:06:18.661707    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/calico-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:06:20.315323    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/custom-flannel-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:06:25.919438    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/skaffold-722887/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:06:35.719519    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/false-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:06:35.726167    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/false-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:06:35.737642    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/false-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:06:35.759131    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/false-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:06:35.800485    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/false-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:06:35.881903    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/false-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:06:36.043626    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/false-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:06:36.365526    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/false-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:06:37.013254    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/false-319495/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-201228 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (53.142047899s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (53.14s)

TestStartStop/group/no-preload/serial/DeployApp (9.34s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-201228 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d0893071-30e3-4f17-b4c5-3d8e59ad508e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0917 18:06:38.294552    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/false-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:06:40.856860    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/false-319495/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [d0893071-30e3-4f17-b4c5-3d8e59ad508e] Running
E0917 18:06:45.978415    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/false-319495/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003938331s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-201228 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.34s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.16s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-201228 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-201228 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.052682862s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-201228 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.16s)

TestStartStop/group/no-preload/serial/Stop (11.01s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-201228 --alsologtostderr -v=3
E0917 18:06:56.219915    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/false-319495/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-201228 --alsologtostderr -v=3: (11.006498234s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.01s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-201228 -n no-preload-201228
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-201228 -n no-preload-201228: exit status 7 (89.555166ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-201228 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/no-preload/serial/SecondStart (270.32s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-201228 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-201228 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m29.761499284s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-201228 -n no-preload-201228
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (270.32s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.97s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-327219 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [96d8d12a-2338-40f6-b8ad-3c18c70b9a2b] Pending
helpers_test.go:344: "busybox" [96d8d12a-2338-40f6-b8ad-3c18c70b9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [96d8d12a-2338-40f6-b8ad-3c18c70b9a2b] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.008655095s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-327219 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.97s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.48s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-327219 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0917 18:07:10.464298    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/enable-default-cni-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:07:10.470657    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/enable-default-cni-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:07:10.482606    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/enable-default-cni-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:07:10.503930    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/enable-default-cni-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:07:10.546155    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/enable-default-cni-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:07:10.628995    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/enable-default-cni-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:07:10.790436    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/enable-default-cni-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:07:11.112185    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/enable-default-cni-319495/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-327219 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.328049462s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-327219 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.48s)

TestStartStop/group/old-k8s-version/serial/Stop (11.67s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-327219 --alsologtostderr -v=3
E0917 18:07:11.753779    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/enable-default-cni-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:07:13.035072    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/enable-default-cni-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:07:15.597502    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/enable-default-cni-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:07:16.701220    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/false-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:07:20.719520    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/enable-default-cni-319495/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-327219 --alsologtostderr -v=3: (11.666121613s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.67s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-327219 -n old-k8s-version-327219
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-327219 -n old-k8s-version-327219: exit status 7 (85.749306ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-327219 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/old-k8s-version/serial/SecondStart (140.68s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-327219 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0917 18:07:30.960834    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/enable-default-cni-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:07:40.583526    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/calico-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:07:42.237263    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/custom-flannel-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:07:48.990244    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/skaffold-722887/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:07:51.442968    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/enable-default-cni-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:07:56.782738    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/auto-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:07:57.662490    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/false-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:08:08.428665    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/kindnet-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:08:08.430264    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/flannel-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:08:08.437034    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/flannel-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:08:08.449661    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/flannel-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:08:08.471017    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/flannel-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:08:08.512263    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/flannel-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:08:08.593655    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/flannel-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:08:08.755369    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/flannel-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:08:09.076978    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/flannel-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:08:09.718606    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/flannel-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:08:11.000952    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/flannel-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:08:13.562745    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/flannel-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:08:18.684960    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/flannel-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:08:24.485055    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/auto-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:08:28.926384    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/flannel-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:08:32.404793    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/enable-default-cni-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:08:36.136929    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/kindnet-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:08:37.544586    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/bridge-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:08:37.550905    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/bridge-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:08:37.562336    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/bridge-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:08:37.583889    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/bridge-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:08:37.625445    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/bridge-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:08:37.706974    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/bridge-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:08:37.868541    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/bridge-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:08:38.190318    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/bridge-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:08:38.831834    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/bridge-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:08:40.113320    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/bridge-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:08:42.674717    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/bridge-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:08:47.796523    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/bridge-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:08:49.407850    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/flannel-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:08:58.037812    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/bridge-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:09:03.346655    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/functional-612770/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:09:18.520023    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/bridge-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:09:19.584178    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/false-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:09:20.277226    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/functional-612770/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:09:30.369970    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/flannel-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:09:39.104862    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-327219 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m20.286407729s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-327219 -n old-k8s-version-327219
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (140.68s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-zsqdd" [b90e4500-701f-4100-8dd6-3ed9a98f4fb4] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003985407s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-zsqdd" [b90e4500-701f-4100-8dd6-3ed9a98f4fb4] Running
E0917 18:09:54.326762    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/enable-default-cni-319495/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003664371s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-327219 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.10s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-327219 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/old-k8s-version/serial/Pause (2.84s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-327219 --alsologtostderr -v=1
E0917 18:09:56.722549    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/calico-319495/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-327219 -n old-k8s-version-327219
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-327219 -n old-k8s-version-327219: exit status 2 (334.057789ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-327219 -n old-k8s-version-327219
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-327219 -n old-k8s-version-327219: exit status 2 (367.867053ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-327219 --alsologtostderr -v=1
E0917 18:09:58.377472    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/custom-flannel-319495/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-327219 -n old-k8s-version-327219
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-327219 -n old-k8s-version-327219
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.84s)

TestStartStop/group/embed-certs/serial/FirstStart (46.88s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-767653 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0917 18:10:10.116719    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/kubenet-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:10:10.123713    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/kubenet-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:10:10.135198    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/kubenet-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:10:10.157280    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/kubenet-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:10:10.198746    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/kubenet-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:10:10.280865    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/kubenet-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:10:10.442836    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/kubenet-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:10:10.764906    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/kubenet-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:10:11.407025    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/kubenet-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:10:12.688510    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/kubenet-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:10:15.249843    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/kubenet-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:10:20.371732    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/kubenet-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:10:24.425314    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/calico-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:10:26.080728    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/custom-flannel-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:10:30.613802    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/kubenet-319495/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-767653 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (46.877962861s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (46.88s)

TestStartStop/group/embed-certs/serial/DeployApp (9.38s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-767653 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b0b4d609-4f5e-412f-8c53-c3fe2b37c152] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0917 18:10:51.095927    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/kubenet-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:10:52.291986    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/flannel-319495/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [b0b4d609-4f5e-412f-8c53-c3fe2b37c152] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004017173s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-767653 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.38s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.13s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-767653 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-767653 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.13s)

TestStartStop/group/embed-certs/serial/Stop (11.26s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-767653 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-767653 --alsologtostderr -v=3: (11.255217902s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.26s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-767653 -n embed-certs-767653
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-767653 -n embed-certs-767653: exit status 7 (76.713833ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-767653 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/embed-certs/serial/SecondStart (268.13s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-767653 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0917 18:11:21.403832    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/bridge-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:11:25.919854    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/skaffold-722887/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-767653 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m27.760810832s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-767653 -n embed-certs-767653
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (268.13s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-q9dbh" [1d8df8fb-d907-4d02-930c-5c8659f66bb6] Running
E0917 18:11:32.058334    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/kubenet-319495/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003612584s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-q9dbh" [1d8df8fb-d907-4d02-930c-5c8659f66bb6] Running
E0917 18:11:35.718725    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/false-319495/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004961032s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-201228 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-201228 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/no-preload/serial/Pause (3.11s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-201228 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-201228 -n no-preload-201228
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-201228 -n no-preload-201228: exit status 2 (336.167149ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-201228 -n no-preload-201228
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-201228 -n no-preload-201228: exit status 2 (382.708496ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-201228 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-201228 -n no-preload-201228
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-201228 -n no-preload-201228
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.11s)
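The Pause test above tolerates non-zero exits from `minikube status` ("status error: exit status 2 (may be ok)"): after `pause`, the APIServer probe prints `Paused` and the Kubelet probe prints `Stopped`, both exiting 2, and the test keys off the printed state rather than the exit code. A minimal, runnable sketch of that interpretation logic follows; `status_after_pause` is a stub standing in for the real `out/minikube-linux-arm64 status --format={{.X}} -p <profile>` call, and the state/exit-code pairs are taken only from what this report shows, not from a documented contract.

```shell
#!/bin/sh
# Stub for `minikube status --format={{.$1}}` on a paused profile.
# Paused/2 and Stopped/2 are the pairs observed in this log (assumption).
status_after_pause() {
  case "$1" in
    APIServer) echo "Paused";  return 2 ;;
    Kubelet)   echo "Stopped"; return 2 ;;
    *)         echo "Unknown"; return 1 ;;
  esac
}

for comp in APIServer Kubelet; do
  out=$(status_after_pause "$comp"); rc=$?
  # Like the test, treat the non-zero exit as "may be ok" and
  # report the printed component state alongside the exit code.
  echo "$comp: $out (exit $rc)"
done
```

After this check the real test runs `unpause` and repeats the same two status probes, which then succeed with exit 0, as the final `Run` lines of the section show.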

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (44.46s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-615921 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0917 18:12:00.854766    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/old-k8s-version-327219/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:12:00.861154    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/old-k8s-version-327219/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:12:00.872534    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/old-k8s-version-327219/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:12:00.893921    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/old-k8s-version-327219/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:12:00.935302    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/old-k8s-version-327219/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:12:01.016711    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/old-k8s-version-327219/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:12:01.178061    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/old-k8s-version-327219/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:12:01.499620    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/old-k8s-version-327219/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:12:02.141305    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/old-k8s-version-327219/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:12:03.423644    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/old-k8s-version-327219/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:12:03.426440    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/false-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:12:05.986011    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/old-k8s-version-327219/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:12:10.464456    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/enable-default-cni-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:12:11.108204    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/old-k8s-version-327219/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:12:21.350078    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/old-k8s-version-327219/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-615921 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (44.45866705s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (44.46s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.39s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-615921 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0b8a1430-6374-4f6e-8c1a-249bb7e0570c] Pending
helpers_test.go:344: "busybox" [0b8a1430-6374-4f6e-8c1a-249bb7e0570c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0b8a1430-6374-4f6e-8c1a-249bb7e0570c] Running
E0917 18:12:38.168395    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/enable-default-cni-319495/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004514329s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-615921 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.39s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-615921 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-615921 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.003524615s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-615921 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.13s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-615921 --alsologtostderr -v=3
E0917 18:12:41.832134    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/old-k8s-version-327219/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-615921 --alsologtostderr -v=3: (10.995704832s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.00s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-615921 -n default-k8s-diff-port-615921
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-615921 -n default-k8s-diff-port-615921: exit status 7 (72.131653ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-615921 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (268.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-615921 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0917 18:12:53.980895    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/kubenet-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:12:56.782989    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/auto-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:13:08.428247    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/kindnet-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:13:08.430563    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/flannel-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:13:22.793451    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/old-k8s-version-327219/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:13:36.134152    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/flannel-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:13:37.544419    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/bridge-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:14:05.245776    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/bridge-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:14:20.277217    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/functional-612770/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:14:39.104741    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/addons-731605/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:14:44.715621    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/old-k8s-version-327219/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:14:56.722012    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/calico-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:14:58.377942    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/custom-flannel-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:15:10.117374    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/kubenet-319495/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:15:37.822182    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/kubenet-319495/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-615921 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m27.824781519s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-615921 -n default-k8s-diff-port-615921
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (268.18s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-dxmhl" [d39a8f49-5624-42d6-a1f2-825e8ffa0353] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005492694s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-dxmhl" [d39a8f49-5624-42d6-a1f2-825e8ffa0353] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003692493s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-767653 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.10s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-767653 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/embed-certs/serial/Pause (3.03s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-767653 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-767653 -n embed-certs-767653
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-767653 -n embed-certs-767653: exit status 2 (345.161628ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-767653 -n embed-certs-767653
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-767653 -n embed-certs-767653: exit status 2 (360.518609ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-767653 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-767653 -n embed-certs-767653
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-767653 -n embed-certs-767653
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.03s)

TestStartStop/group/newest-cni/serial/FirstStart (39.16s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-696824 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0917 18:16:25.918954    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/skaffold-722887/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:16:35.718463    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/false-319495/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-696824 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (39.155613561s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (39.16s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.26s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-696824 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0917 18:16:37.359368    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/no-preload-201228/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:16:37.365739    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/no-preload-201228/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:16:37.377103    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/no-preload-201228/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:16:37.398528    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/no-preload-201228/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:16:37.439928    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/no-preload-201228/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:16:37.521351    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/no-preload-201228/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:16:37.683061    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/no-preload-201228/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-696824 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.261695188s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.26s)

TestStartStop/group/newest-cni/serial/Stop (10.94s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-696824 --alsologtostderr -v=3
E0917 18:16:38.005063    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/no-preload-201228/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:16:38.646818    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/no-preload-201228/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:16:39.928174    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/no-preload-201228/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:16:42.489674    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/no-preload-201228/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:16:47.611478    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/no-preload-201228/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-696824 --alsologtostderr -v=3: (10.943430603s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.94s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-696824 -n newest-cni-696824
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-696824 -n newest-cni-696824: exit status 7 (74.968814ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-696824 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/newest-cni/serial/SecondStart (21.06s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-696824 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0917 18:16:57.853420    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/no-preload-201228/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:17:00.854766    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/old-k8s-version-327219/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-696824 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (20.47488315s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-696824 -n newest-cni-696824
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (21.06s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-696824 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/newest-cni/serial/Pause (3.77s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-696824 --alsologtostderr -v=1
E0917 18:17:10.464130    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/enable-default-cni-319495/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-696824 -n newest-cni-696824
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-696824 -n newest-cni-696824: exit status 2 (355.691347ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-696824 -n newest-cni-696824
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-696824 -n newest-cni-696824: exit status 2 (355.881895ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-696824 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-696824 -n newest-cni-696824
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-696824 -n newest-cni-696824
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.77s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-twcbq" [ba9b4849-e1b2-41ba-873e-727578ad2c7a] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004140596s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-twcbq" [ba9b4849-e1b2-41ba-873e-727578ad2c7a] Running
E0917 18:17:28.557675    7562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-2253/.minikube/profiles/old-k8s-version-327219/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0040803s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-615921 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-615921 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.85s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-615921 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-615921 -n default-k8s-diff-port-615921
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-615921 -n default-k8s-diff-port-615921: exit status 2 (326.689428ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-615921 -n default-k8s-diff-port-615921
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-615921 -n default-k8s-diff-port-615921: exit status 2 (325.508729ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-615921 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-615921 -n default-k8s-diff-port-615921
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-615921 -n default-k8s-diff-port-615921
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.85s)

Test skip (24/343)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnlyKic (0.53s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-449671 --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-449671" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-449671
--- SKIP: TestDownloadOnlyKic (0.53s)

TestAddons/parallel/HelmTiller (0s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/cilium (5.36s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-319495 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-319495

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-319495

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-319495

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-319495

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-319495

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-319495

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-319495

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-319495

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-319495

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-319495

>>> host: /etc/nsswitch.conf:
* Profile "cilium-319495" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-319495"

>>> host: /etc/hosts:
* Profile "cilium-319495" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-319495"

>>> host: /etc/resolv.conf:
* Profile "cilium-319495" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-319495"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-319495

>>> host: crictl pods:
* Profile "cilium-319495" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-319495"

>>> host: crictl containers:
* Profile "cilium-319495" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-319495"

>>> k8s: describe netcat deployment:
error: context "cilium-319495" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-319495" does not exist

>>> k8s: netcat logs:
error: context "cilium-319495" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-319495" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-319495" does not exist

>>> k8s: coredns logs:
error: context "cilium-319495" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-319495" does not exist

>>> k8s: api server logs:
error: context "cilium-319495" does not exist

>>> host: /etc/cni:
* Profile "cilium-319495" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-319495"

>>> host: ip a s:
* Profile "cilium-319495" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-319495"

>>> host: ip r s:
* Profile "cilium-319495" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-319495"

>>> host: iptables-save:
* Profile "cilium-319495" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-319495"

>>> host: iptables table nat:
* Profile "cilium-319495" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-319495"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-319495

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-319495

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-319495" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-319495" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-319495

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-319495

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-319495" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-319495" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-319495" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-319495" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-319495" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-319495" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-319495"

>>> host: kubelet daemon config:
* Profile "cilium-319495" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-319495"

>>> k8s: kubelet logs:
* Profile "cilium-319495" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-319495"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-319495" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-319495"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-319495" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-319495"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-319495

>>> host: docker daemon status:
* Profile "cilium-319495" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-319495"

>>> host: docker daemon config:
* Profile "cilium-319495" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-319495"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-319495" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-319495"

>>> host: docker system info:
* Profile "cilium-319495" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-319495"

>>> host: cri-docker daemon status:
* Profile "cilium-319495" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-319495"

>>> host: cri-docker daemon config:
* Profile "cilium-319495" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-319495"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-319495" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-319495"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-319495" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-319495"

>>> host: cri-dockerd version:
* Profile "cilium-319495" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-319495"

>>> host: containerd daemon status:
* Profile "cilium-319495" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-319495"

>>> host: containerd daemon config:
* Profile "cilium-319495" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-319495"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-319495" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-319495"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-319495" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-319495"

>>> host: containerd config dump:
* Profile "cilium-319495" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-319495"

>>> host: crio daemon status:
* Profile "cilium-319495" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-319495"

>>> host: crio daemon config:
* Profile "cilium-319495" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-319495"

>>> host: /etc/crio:
* Profile "cilium-319495" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-319495"

>>> host: crio config:
* Profile "cilium-319495" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-319495"

----------------------- debugLogs end: cilium-319495 [took: 5.097337878s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-319495" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-319495
--- SKIP: TestNetworkPlugins/group/cilium (5.36s)

TestStartStop/group/disable-driver-mounts (0.15s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-616121" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-616121
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)