Test Report: Docker_Linux_docker_arm64 19576

2e9b50ac88536491e648f1503809a6b59d99d481:2024-09-06:36104

Failed tests (2/343)

| Order | Failed test                     | Duration (s) |
|-------|---------------------------------|--------------|
| 33    | TestAddons/parallel/Registry    | 74.56        |
| 111   | TestFunctional/parallel/License | 0.23         |
TestAddons/parallel/Registry (74.56s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 2.79225ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-f4qv7" [4c34f666-b1de-4d3c-8f16-830242c1fba7] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.00398949s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-hmrwx" [5fe8e711-512c-42ae-88ce-cb1b93021495] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005928022s
addons_test.go:342: (dbg) Run:  kubectl --context addons-724441 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-724441 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-724441 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.110755292s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-724441 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:361: (dbg) Run:  out/minikube-linux-arm64 -p addons-724441 ip
2024/09/06 18:43:15 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-arm64 -p addons-724441 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-724441
helpers_test.go:235: (dbg) docker inspect addons-724441:

-- stdout --
	[
	    {
	        "Id": "3dd11caca81fbfa2f70690a27f7c8f3bb5b0a370d52378f7b078ed9bc6de4930",
	        "Created": "2024-09-06T18:30:01.599006806Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 8792,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-06T18:30:01.819998076Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:8411aacd61cb8f2a7ae48c92e2c9e76ad4dd701b3dba8b30393c5cc31fbd2b15",
	        "ResolvConfPath": "/var/lib/docker/containers/3dd11caca81fbfa2f70690a27f7c8f3bb5b0a370d52378f7b078ed9bc6de4930/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3dd11caca81fbfa2f70690a27f7c8f3bb5b0a370d52378f7b078ed9bc6de4930/hostname",
	        "HostsPath": "/var/lib/docker/containers/3dd11caca81fbfa2f70690a27f7c8f3bb5b0a370d52378f7b078ed9bc6de4930/hosts",
	        "LogPath": "/var/lib/docker/containers/3dd11caca81fbfa2f70690a27f7c8f3bb5b0a370d52378f7b078ed9bc6de4930/3dd11caca81fbfa2f70690a27f7c8f3bb5b0a370d52378f7b078ed9bc6de4930-json.log",
	        "Name": "/addons-724441",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-724441:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-724441",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/8f14c1a8c0623a86e95c4975c7aee86fe0f07de563bd5c33d3e1af12d40e5a97-init/diff:/var/lib/docker/overlay2/25b53ddba23215a8fee2014c0a8f80a3e09cb04e78fcc82368566f86c97e16cc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8f14c1a8c0623a86e95c4975c7aee86fe0f07de563bd5c33d3e1af12d40e5a97/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8f14c1a8c0623a86e95c4975c7aee86fe0f07de563bd5c33d3e1af12d40e5a97/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8f14c1a8c0623a86e95c4975c7aee86fe0f07de563bd5c33d3e1af12d40e5a97/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-724441",
	                "Source": "/var/lib/docker/volumes/addons-724441/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-724441",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-724441",
	                "name.minikube.sigs.k8s.io": "addons-724441",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "316c8add2e5c17eb34361566d3d8d057e645234ee379a97f871a6a34685bd9e8",
	            "SandboxKey": "/var/run/docker/netns/316c8add2e5c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-724441": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "79f666743c62c88a7da9be43d3da43c29527463d4718722f54ef92d52c91e276",
	                    "EndpointID": "b44bc2040f6256849231355ccf61b8ece9bb1738334a17c6bd66dd1873a86263",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-724441",
	                        "3dd11caca81f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-724441 -n addons-724441
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-724441 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-724441 logs -n 25: (1.276305131s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-593927   | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC |                     |
	|         | -p download-only-593927                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                                                |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube               | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | 06 Sep 24 18:29 UTC |
	| delete  | -p download-only-593927                                                                     | download-only-593927   | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | 06 Sep 24 18:29 UTC |
	| start   | -o=json --download-only                                                                     | download-only-454037   | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC |                     |
	|         | -p download-only-454037                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                                                                |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube               | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | 06 Sep 24 18:29 UTC |
	| delete  | -p download-only-454037                                                                     | download-only-454037   | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | 06 Sep 24 18:29 UTC |
	| delete  | -p download-only-593927                                                                     | download-only-593927   | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | 06 Sep 24 18:29 UTC |
	| delete  | -p download-only-454037                                                                     | download-only-454037   | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | 06 Sep 24 18:29 UTC |
	| start   | --download-only -p                                                                          | download-docker-333436 | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC |                     |
	|         | download-docker-333436                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | -p download-docker-333436                                                                   | download-docker-333436 | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | 06 Sep 24 18:29 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-294650   | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC |                     |
	|         | binary-mirror-294650                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:45513                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-294650                                                                     | binary-mirror-294650   | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | 06 Sep 24 18:29 UTC |
	| addons  | enable dashboard -p                                                                         | addons-724441          | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC |                     |
	|         | addons-724441                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-724441          | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC |                     |
	|         | addons-724441                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-724441 --wait=true                                                                | addons-724441          | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | 06 Sep 24 18:33 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | addons-724441 addons disable                                                                | addons-724441          | jenkins | v1.34.0 | 06 Sep 24 18:33 UTC | 06 Sep 24 18:34 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-724441 addons disable                                                                | addons-724441          | jenkins | v1.34.0 | 06 Sep 24 18:42 UTC | 06 Sep 24 18:42 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | addons-724441 addons                                                                        | addons-724441          | jenkins | v1.34.0 | 06 Sep 24 18:42 UTC | 06 Sep 24 18:42 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-724441 addons                                                                        | addons-724441          | jenkins | v1.34.0 | 06 Sep 24 18:42 UTC | 06 Sep 24 18:42 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-724441          | jenkins | v1.34.0 | 06 Sep 24 18:43 UTC | 06 Sep 24 18:43 UTC |
	|         | -p addons-724441                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-724441 ssh cat                                                                       | addons-724441          | jenkins | v1.34.0 | 06 Sep 24 18:43 UTC | 06 Sep 24 18:43 UTC |
	|         | /opt/local-path-provisioner/pvc-7aca5f61-b674-43cb-8d89-d088d3ea181f_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-724441 addons disable                                                                | addons-724441          | jenkins | v1.34.0 | 06 Sep 24 18:43 UTC |                     |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-724441 ip                                                                            | addons-724441          | jenkins | v1.34.0 | 06 Sep 24 18:43 UTC | 06 Sep 24 18:43 UTC |
	| addons  | addons-724441 addons disable                                                                | addons-724441          | jenkins | v1.34.0 | 06 Sep 24 18:43 UTC | 06 Sep 24 18:43 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/06 18:29:36
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 18:29:36.780153    8287 out.go:345] Setting OutFile to fd 1 ...
	I0906 18:29:36.780313    8287 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:29:36.780325    8287 out.go:358] Setting ErrFile to fd 2...
	I0906 18:29:36.780345    8287 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:29:36.780624    8287 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-2220/.minikube/bin
	I0906 18:29:36.781091    8287 out.go:352] Setting JSON to false
	I0906 18:29:36.781895    8287 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":722,"bootTime":1725646655,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0906 18:29:36.781964    8287 start.go:139] virtualization:  
	I0906 18:29:36.784675    8287 out.go:177] * [addons-724441] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0906 18:29:36.786005    8287 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 18:29:36.786120    8287 notify.go:220] Checking for updates...
	I0906 18:29:36.788353    8287 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 18:29:36.789568    8287 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19576-2220/kubeconfig
	I0906 18:29:36.790773    8287 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-2220/.minikube
	I0906 18:29:36.792062    8287 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0906 18:29:36.793210    8287 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 18:29:36.794589    8287 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 18:29:36.816121    8287 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0906 18:29:36.816250    8287 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 18:29:36.887088    8287 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-06 18:29:36.873248999 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0906 18:29:36.887203    8287 docker.go:318] overlay module found
	I0906 18:29:36.888585    8287 out.go:177] * Using the docker driver based on user configuration
	I0906 18:29:36.889849    8287 start.go:297] selected driver: docker
	I0906 18:29:36.889868    8287 start.go:901] validating driver "docker" against <nil>
	I0906 18:29:36.889881    8287 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 18:29:36.890494    8287 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 18:29:36.940970    8287 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-06 18:29:36.931856629 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0906 18:29:36.941128    8287 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0906 18:29:36.941400    8287 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 18:29:36.942642    8287 out.go:177] * Using Docker driver with root privileges
	I0906 18:29:36.943629    8287 cni.go:84] Creating CNI manager for ""
	I0906 18:29:36.943657    8287 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 18:29:36.943664    8287 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0906 18:29:36.943744    8287 start.go:340] cluster config:
	{Name:addons-724441 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-724441 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 18:29:36.945014    8287 out.go:177] * Starting "addons-724441" primary control-plane node in "addons-724441" cluster
	I0906 18:29:36.946595    8287 cache.go:121] Beginning downloading kic base image for docker with docker
	I0906 18:29:36.947988    8287 out.go:177] * Pulling base image v0.0.45 ...
	I0906 18:29:36.949230    8287 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 18:29:36.949282    8287 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19576-2220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0906 18:29:36.949288    8287 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 in local docker daemon
	I0906 18:29:36.949294    8287 cache.go:56] Caching tarball of preloaded images
	I0906 18:29:36.949374    8287 preload.go:172] Found /home/jenkins/minikube-integration/19576-2220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 18:29:36.949404    8287 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0906 18:29:36.949743    8287 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/addons-724441/config.json ...
	I0906 18:29:36.949769    8287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/addons-724441/config.json: {Name:mk3fd9958bb95b09eb3ace1f1b4ba185adc76394 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:29:36.964441    8287 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 to local cache
	I0906 18:29:36.964553    8287 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 in local cache directory
	I0906 18:29:36.964576    8287 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 in local cache directory, skipping pull
	I0906 18:29:36.964586    8287 image.go:135] gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 exists in cache, skipping pull
	I0906 18:29:36.964595    8287 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 as a tarball
	I0906 18:29:36.964600    8287 cache.go:162] Loading gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 from local cache
	I0906 18:29:54.142181    8287 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 from cached tarball
	I0906 18:29:54.142220    8287 cache.go:194] Successfully downloaded all kic artifacts
	I0906 18:29:54.142272    8287 start.go:360] acquireMachinesLock for addons-724441: {Name:mk12dc25c05958298ae164009e7440df0c321eb1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 18:29:54.142395    8287 start.go:364] duration metric: took 100.594µs to acquireMachinesLock for "addons-724441"
	I0906 18:29:54.142424    8287 start.go:93] Provisioning new machine with config: &{Name:addons-724441 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-724441 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 18:29:54.142508    8287 start.go:125] createHost starting for "" (driver="docker")
	I0906 18:29:54.143970    8287 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0906 18:29:54.144205    8287 start.go:159] libmachine.API.Create for "addons-724441" (driver="docker")
	I0906 18:29:54.144233    8287 client.go:168] LocalClient.Create starting
	I0906 18:29:54.144336    8287 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19576-2220/.minikube/certs/ca.pem
	I0906 18:29:54.853251    8287 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19576-2220/.minikube/certs/cert.pem
	I0906 18:29:55.133555    8287 cli_runner.go:164] Run: docker network inspect addons-724441 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0906 18:29:55.148571    8287 cli_runner.go:211] docker network inspect addons-724441 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0906 18:29:55.148663    8287 network_create.go:284] running [docker network inspect addons-724441] to gather additional debugging logs...
	I0906 18:29:55.148688    8287 cli_runner.go:164] Run: docker network inspect addons-724441
	W0906 18:29:55.164419    8287 cli_runner.go:211] docker network inspect addons-724441 returned with exit code 1
	I0906 18:29:55.164456    8287 network_create.go:287] error running [docker network inspect addons-724441]: docker network inspect addons-724441: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-724441 not found
	I0906 18:29:55.164479    8287 network_create.go:289] output of [docker network inspect addons-724441]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-724441 not found
	
	** /stderr **
	I0906 18:29:55.164592    8287 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0906 18:29:55.179990    8287 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001756990}
	I0906 18:29:55.180038    8287 network_create.go:124] attempt to create docker network addons-724441 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0906 18:29:55.180099    8287 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-724441 addons-724441
	I0906 18:29:55.248095    8287 network_create.go:108] docker network addons-724441 192.168.49.0/24 created
	I0906 18:29:55.248128    8287 kic.go:121] calculated static IP "192.168.49.2" for the "addons-724441" container
	I0906 18:29:55.248200    8287 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0906 18:29:55.263673    8287 cli_runner.go:164] Run: docker volume create addons-724441 --label name.minikube.sigs.k8s.io=addons-724441 --label created_by.minikube.sigs.k8s.io=true
	I0906 18:29:55.279932    8287 oci.go:103] Successfully created a docker volume addons-724441
	I0906 18:29:55.280026    8287 cli_runner.go:164] Run: docker run --rm --name addons-724441-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-724441 --entrypoint /usr/bin/test -v addons-724441:/var gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 -d /var/lib
	I0906 18:29:57.468189    8287 cli_runner.go:217] Completed: docker run --rm --name addons-724441-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-724441 --entrypoint /usr/bin/test -v addons-724441:/var gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 -d /var/lib: (2.188117109s)
	I0906 18:29:57.468217    8287 oci.go:107] Successfully prepared a docker volume addons-724441
	I0906 18:29:57.468236    8287 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 18:29:57.468256    8287 kic.go:194] Starting extracting preloaded images to volume ...
	I0906 18:29:57.468331    8287 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19576-2220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-724441:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 -I lz4 -xf /preloaded.tar -C /extractDir
	I0906 18:30:01.513009    8287 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19576-2220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-724441:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 -I lz4 -xf /preloaded.tar -C /extractDir: (4.044625698s)
	I0906 18:30:01.513051    8287 kic.go:203] duration metric: took 4.044789962s to extract preloaded images to volume ...
	W0906 18:30:01.513263    8287 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0906 18:30:01.513455    8287 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0906 18:30:01.580002    8287 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-724441 --name addons-724441 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-724441 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-724441 --network addons-724441 --ip 192.168.49.2 --volume addons-724441:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85
	I0906 18:30:02.076678    8287 cli_runner.go:164] Run: docker container inspect addons-724441 --format={{.State.Running}}
	I0906 18:30:02.123389    8287 cli_runner.go:164] Run: docker container inspect addons-724441 --format={{.State.Status}}
	I0906 18:30:02.153739    8287 cli_runner.go:164] Run: docker exec addons-724441 stat /var/lib/dpkg/alternatives/iptables
	I0906 18:30:02.255296    8287 oci.go:144] the created container "addons-724441" has a running status.
	I0906 18:30:02.255324    8287 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19576-2220/.minikube/machines/addons-724441/id_rsa...
	I0906 18:30:03.007079    8287 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19576-2220/.minikube/machines/addons-724441/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0906 18:30:03.035776    8287 cli_runner.go:164] Run: docker container inspect addons-724441 --format={{.State.Status}}
	I0906 18:30:03.054604    8287 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0906 18:30:03.054629    8287 kic_runner.go:114] Args: [docker exec --privileged addons-724441 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0906 18:30:03.115358    8287 cli_runner.go:164] Run: docker container inspect addons-724441 --format={{.State.Status}}
	I0906 18:30:03.135943    8287 machine.go:93] provisionDockerMachine start ...
	I0906 18:30:03.136046    8287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-724441
	I0906 18:30:03.156054    8287 main.go:141] libmachine: Using SSH client type: native
	I0906 18:30:03.156314    8287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0906 18:30:03.156323    8287 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 18:30:03.281298    8287 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-724441
	
	I0906 18:30:03.281323    8287 ubuntu.go:169] provisioning hostname "addons-724441"
	I0906 18:30:03.281451    8287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-724441
	I0906 18:30:03.302621    8287 main.go:141] libmachine: Using SSH client type: native
	I0906 18:30:03.302862    8287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0906 18:30:03.302881    8287 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-724441 && echo "addons-724441" | sudo tee /etc/hostname
	I0906 18:30:03.443162    8287 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-724441
	
	I0906 18:30:03.443254    8287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-724441
	I0906 18:30:03.463037    8287 main.go:141] libmachine: Using SSH client type: native
	I0906 18:30:03.463282    8287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0906 18:30:03.463301    8287 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-724441' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-724441/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-724441' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 18:30:03.581482    8287 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 18:30:03.581537    8287 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19576-2220/.minikube CaCertPath:/home/jenkins/minikube-integration/19576-2220/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19576-2220/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19576-2220/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19576-2220/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19576-2220/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19576-2220/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19576-2220/.minikube}
	I0906 18:30:03.581572    8287 ubuntu.go:177] setting up certificates
	I0906 18:30:03.581582    8287 provision.go:84] configureAuth start
	I0906 18:30:03.581642    8287 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-724441
	I0906 18:30:03.598580    8287 provision.go:143] copyHostCerts
	I0906 18:30:03.598668    8287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-2220/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19576-2220/.minikube/ca.pem (1078 bytes)
	I0906 18:30:03.598799    8287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-2220/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19576-2220/.minikube/cert.pem (1123 bytes)
	I0906 18:30:03.598861    8287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-2220/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19576-2220/.minikube/key.pem (1679 bytes)
	I0906 18:30:03.598906    8287 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19576-2220/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19576-2220/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19576-2220/.minikube/certs/ca-key.pem org=jenkins.addons-724441 san=[127.0.0.1 192.168.49.2 addons-724441 localhost minikube]
	I0906 18:30:03.832757    8287 provision.go:177] copyRemoteCerts
	I0906 18:30:03.832830    8287 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 18:30:03.832878    8287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-724441
	I0906 18:30:03.851173    8287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19576-2220/.minikube/machines/addons-724441/id_rsa Username:docker}
	I0906 18:30:03.938187    8287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-2220/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 18:30:03.962835    8287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-2220/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0906 18:30:03.986481    8287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-2220/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0906 18:30:04.011652    8287 provision.go:87] duration metric: took 430.054134ms to configureAuth
	I0906 18:30:04.011681    8287 ubuntu.go:193] setting minikube options for container-runtime
	I0906 18:30:04.011909    8287 config.go:182] Loaded profile config "addons-724441": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 18:30:04.011968    8287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-724441
	I0906 18:30:04.030109    8287 main.go:141] libmachine: Using SSH client type: native
	I0906 18:30:04.030402    8287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0906 18:30:04.030416    8287 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0906 18:30:04.153577    8287 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0906 18:30:04.153601    8287 ubuntu.go:71] root file system type: overlay
	I0906 18:30:04.153726    8287 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0906 18:30:04.153794    8287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-724441
	I0906 18:30:04.170632    8287 main.go:141] libmachine: Using SSH client type: native
	I0906 18:30:04.170875    8287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0906 18:30:04.170958    8287 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0906 18:30:04.300459    8287 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0906 18:30:04.300541    8287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-724441
	I0906 18:30:04.317295    8287 main.go:141] libmachine: Using SSH client type: native
	I0906 18:30:04.317582    8287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0906 18:30:04.317608    8287 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0906 18:30:05.101189    8287 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-08-27 14:13:43.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-09-06 18:30:04.294851180 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
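The `diff -u old new || { mv ...; restart ...; }` command above is minikube's update-if-changed idiom: `diff` exits 0 when the files match and non-zero when they differ, so the replace-and-restart branch fires only when the generated unit actually changed. A minimal sketch of the same idiom, using temp files in place of `/lib/systemd/system/docker.service` and echoing where minikube would run `systemctl daemon-reload && systemctl restart docker`:

```shell
# Scratch stand-ins for the real unit file and its staged ".new" copy.
workdir=$(mktemp -d)
printf 'ExecStart=/usr/bin/dockerd -H fd://\n' > "$workdir/docker.service"
printf 'ExecStart=\nExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376\n' > "$workdir/docker.service.new"

# diff exits 0 on identical files, non-zero on a difference, so the
# replacement branch only runs when the unit content actually changed.
if ! diff -u "$workdir/docker.service" "$workdir/docker.service.new" > /dev/null; then
  mv "$workdir/docker.service.new" "$workdir/docker.service"
  echo "unit updated"   # here minikube runs daemon-reload + enable + restart
else
  echo "unit unchanged" # no restart needed, existing unit kept
fi
```

On an unchanged unit the branch is skipped entirely, which is why repeated `minikube start` runs do not restart Docker needlessly.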
	I0906 18:30:05.101270    8287 machine.go:96] duration metric: took 1.965305706s to provisionDockerMachine
	I0906 18:30:05.101298    8287 client.go:171] duration metric: took 10.957053445s to LocalClient.Create
	I0906 18:30:05.101356    8287 start.go:167] duration metric: took 10.957124952s to libmachine.API.Create "addons-724441"
	I0906 18:30:05.101401    8287 start.go:293] postStartSetup for "addons-724441" (driver="docker")
	I0906 18:30:05.101428    8287 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 18:30:05.101532    8287 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 18:30:05.101614    8287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-724441
	I0906 18:30:05.120234    8287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19576-2220/.minikube/machines/addons-724441/id_rsa Username:docker}
	I0906 18:30:05.218418    8287 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 18:30:05.221556    8287 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0906 18:30:05.221596    8287 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0906 18:30:05.221608    8287 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0906 18:30:05.221631    8287 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0906 18:30:05.221647    8287 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-2220/.minikube/addons for local assets ...
	I0906 18:30:05.221725    8287 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-2220/.minikube/files for local assets ...
	I0906 18:30:05.221752    8287 start.go:296] duration metric: took 120.329726ms for postStartSetup
	I0906 18:30:05.222060    8287 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-724441
	I0906 18:30:05.237709    8287 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/addons-724441/config.json ...
	I0906 18:30:05.237992    8287 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 18:30:05.238049    8287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-724441
	I0906 18:30:05.253610    8287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19576-2220/.minikube/machines/addons-724441/id_rsa Username:docker}
	I0906 18:30:05.337888    8287 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0906 18:30:05.342694    8287 start.go:128] duration metric: took 11.20017043s to createHost
	I0906 18:30:05.342718    8287 start.go:83] releasing machines lock for "addons-724441", held for 11.200310005s
	I0906 18:30:05.342801    8287 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-724441
	I0906 18:30:05.359924    8287 ssh_runner.go:195] Run: cat /version.json
	I0906 18:30:05.359986    8287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-724441
	I0906 18:30:05.360241    8287 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 18:30:05.360313    8287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-724441
	I0906 18:30:05.380523    8287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19576-2220/.minikube/machines/addons-724441/id_rsa Username:docker}
	I0906 18:30:05.387186    8287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19576-2220/.minikube/machines/addons-724441/id_rsa Username:docker}
	I0906 18:30:05.473093    8287 ssh_runner.go:195] Run: systemctl --version
	I0906 18:30:05.603055    8287 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0906 18:30:05.606972    8287 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0906 18:30:05.631141    8287 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0906 18:30:05.631215    8287 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 18:30:05.661142    8287 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
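The `find ... -exec mv` step above disables competing bridge/podman CNI configs by renaming them with a `.mk_disabled` suffix rather than deleting them, so they can be restored later. A sketch of the same filter against a scratch `net.d` directory (directory name is illustrative):

```shell
# Scratch CNI config dir with two bridge-style configs and one loopback config.
netd=$(mktemp -d)
touch "$netd/87-podman-bridge.conflist" "$netd/100-crio-bridge.conf" "$netd/10-loopback.conf"

# Same predicate as the logged command: match *bridge* or *podman*,
# skip anything already renamed, and rename the rest to *.mk_disabled.
find "$netd" -maxdepth 1 -type f \
  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
```

The loopback config is untouched; it was patched in the previous step instead of disabled.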
	I0906 18:30:05.661217    8287 start.go:495] detecting cgroup driver to use...
	I0906 18:30:05.661263    8287 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0906 18:30:05.661475    8287 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
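The `mkdir -p /etc && printf %s "..." | sudo tee` pipeline above is how minikube writes `/etc/crictl.yaml` over SSH without needing a file upload. The same pattern against a scratch directory (no `sudo`):

```shell
# Scratch /etc; the real command targets /etc/crictl.yaml on the node.
etc=$(mktemp -d)
mkdir -p "$etc" && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | tee "$etc/crictl.yaml" > /dev/null
```

`tee` (rather than `>`) matters in the real command because the redirection would run as the SSH user while `tee` runs under `sudo`.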
	I0906 18:30:05.677491    8287 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0906 18:30:05.687030    8287 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0906 18:30:05.696266    8287 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0906 18:30:05.696378    8287 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0906 18:30:05.705673    8287 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 18:30:05.715496    8287 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0906 18:30:05.725121    8287 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 18:30:05.734929    8287 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 18:30:05.743801    8287 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0906 18:30:05.753724    8287 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0906 18:30:05.763440    8287 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
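The run of `ssh_runner` commands above rewrites `/etc/containerd/config.toml` with in-place `sed` substitutions, each capturing the line's leading indentation with `( *)` so the TOML nesting is preserved. A sketch of three of those substitutions against a scratch config (the sample TOML content is illustrative):

```shell
# Minimal stand-in for /etc/containerd/config.toml.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.9"
  restrict_oom_score_adj = true
  SystemdCgroup = true
EOF

# Same substitutions as the logged commands (minus sudo); \1 re-emits
# the captured indentation so the file stays structurally valid TOML.
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' "$cfg"
sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' "$cfg"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
```

`SystemdCgroup = false` matches the "cgroupfs" driver detected on the host earlier in the log; with a systemd cgroup driver the substitution would set `true` instead.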
	I0906 18:30:05.772952    8287 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 18:30:05.781553    8287 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 18:30:05.789871    8287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 18:30:05.881210    8287 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0906 18:30:05.981867    8287 start.go:495] detecting cgroup driver to use...
	I0906 18:30:05.981915    8287 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0906 18:30:05.981981    8287 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0906 18:30:05.994711    8287 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0906 18:30:05.994778    8287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 18:30:06.012166    8287 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 18:30:06.031063    8287 ssh_runner.go:195] Run: which cri-dockerd
	I0906 18:30:06.037993    8287 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0906 18:30:06.049858    8287 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0906 18:30:06.068892    8287 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0906 18:30:06.166355    8287 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0906 18:30:06.261906    8287 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0906 18:30:06.262099    8287 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0906 18:30:06.291249    8287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 18:30:06.393976    8287 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0906 18:30:06.663814    8287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0906 18:30:06.675727    8287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0906 18:30:06.687976    8287 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0906 18:30:06.774000    8287 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0906 18:30:06.854911    8287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 18:30:06.938256    8287 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0906 18:30:06.952722    8287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0906 18:30:06.964262    8287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 18:30:07.058119    8287 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0906 18:30:07.125682    8287 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0906 18:30:07.125813    8287 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0906 18:30:07.129646    8287 start.go:563] Will wait 60s for crictl version
	I0906 18:30:07.129743    8287 ssh_runner.go:195] Run: which crictl
	I0906 18:30:07.133448    8287 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 18:30:07.169351    8287 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.0
	RuntimeApiVersion:  v1
	I0906 18:30:07.169480    8287 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 18:30:07.192331    8287 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0906 18:30:07.217805    8287 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.2.0 ...
	I0906 18:30:07.217898    8287 cli_runner.go:164] Run: docker network inspect addons-724441 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0906 18:30:07.232867    8287 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0906 18:30:07.236688    8287 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
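The `/etc/hosts` update above uses a grep-filter/append/copy sequence: drop any stale `host.minikube.internal` line (matched with a literal tab via bash's `$'\t'`), append the fresh mapping, then copy the rewritten file back in one step. The same idiom on a scratch hosts file:

```shell
# Scratch stand-in for /etc/hosts with one stale minikube entry.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.49.1\thost.minikube.internal\n' > "$hosts"

# Filter out the old entry, append the current one, then copy back.
{ grep -v $'\thost.minikube.internal$' "$hosts"; printf '192.168.49.1\thost.minikube.internal\n'; } > "$hosts.new"
cp "$hosts.new" "$hosts"
```

Writing to a temp file and copying back (instead of redirecting onto `$hosts` directly) avoids truncating the file before `grep` has read it.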
	I0906 18:30:07.247792    8287 kubeadm.go:883] updating cluster {Name:addons-724441 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-724441 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0906 18:30:07.247910    8287 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 18:30:07.247972    8287 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 18:30:07.266215    8287 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0906 18:30:07.266236    8287 docker.go:615] Images already preloaded, skipping extraction
	I0906 18:30:07.266308    8287 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0906 18:30:07.284453    8287 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0906 18:30:07.284476    8287 cache_images.go:84] Images are preloaded, skipping loading
	I0906 18:30:07.284495    8287 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.0 docker true true} ...
	I0906 18:30:07.284597    8287 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-724441 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-724441 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0906 18:30:07.284664    8287 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0906 18:30:07.332414    8287 cni.go:84] Creating CNI manager for ""
	I0906 18:30:07.332440    8287 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 18:30:07.332450    8287 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0906 18:30:07.332471    8287 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-724441 NodeName:addons-724441 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0906 18:30:07.332617    8287 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-724441"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0906 18:30:07.332706    8287 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0906 18:30:07.341491    8287 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 18:30:07.341599    8287 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 18:30:07.350042    8287 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0906 18:30:07.368333    8287 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 18:30:07.386488    8287 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0906 18:30:07.404561    8287 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0906 18:30:07.407900    8287 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 18:30:07.418567    8287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 18:30:07.516010    8287 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 18:30:07.530144    8287 certs.go:68] Setting up /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/addons-724441 for IP: 192.168.49.2
	I0906 18:30:07.530167    8287 certs.go:194] generating shared ca certs ...
	I0906 18:30:07.530182    8287 certs.go:226] acquiring lock for ca certs: {Name:mk5f2aa6dec75c1270888b4c7e740faf9e29bd44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:07.530318    8287 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19576-2220/.minikube/ca.key
	I0906 18:30:07.797899    8287 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19576-2220/.minikube/ca.crt ...
	I0906 18:30:07.797928    8287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-2220/.minikube/ca.crt: {Name:mk1972326537ee601fcb735b4ce1ff01887ef730 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:07.798119    8287 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19576-2220/.minikube/ca.key ...
	I0906 18:30:07.798135    8287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-2220/.minikube/ca.key: {Name:mk15fa3d288ef2323e18c18b24f4bb2584981005 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:07.798222    8287 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19576-2220/.minikube/proxy-client-ca.key
	I0906 18:30:08.369796    8287 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19576-2220/.minikube/proxy-client-ca.crt ...
	I0906 18:30:08.369824    8287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-2220/.minikube/proxy-client-ca.crt: {Name:mkbea377fc400d52fa6f9cf825334f3947dab86d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:08.370010    8287 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19576-2220/.minikube/proxy-client-ca.key ...
	I0906 18:30:08.370022    8287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-2220/.minikube/proxy-client-ca.key: {Name:mka7759c1156b28a268aeb73b16f3818856fe2cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:08.370109    8287 certs.go:256] generating profile certs ...
	I0906 18:30:08.370166    8287 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/addons-724441/client.key
	I0906 18:30:08.370184    8287 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/addons-724441/client.crt with IP's: []
	I0906 18:30:09.215637    8287 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/addons-724441/client.crt ...
	I0906 18:30:09.215669    8287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/addons-724441/client.crt: {Name:mk16cb57a5462e1053c7278e4b8072ec01171e6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:09.215885    8287 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/addons-724441/client.key ...
	I0906 18:30:09.215900    8287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/addons-724441/client.key: {Name:mkf7487d99a2bf4fc3e6cc4d21bd82e64f4d70de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:09.215992    8287 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/addons-724441/apiserver.key.cf9374aa
	I0906 18:30:09.216012    8287 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/addons-724441/apiserver.crt.cf9374aa with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0906 18:30:09.334622    8287 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/addons-724441/apiserver.crt.cf9374aa ...
	I0906 18:30:09.334651    8287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/addons-724441/apiserver.crt.cf9374aa: {Name:mk5c375f286b9d64ab4679745b2b263a44df580c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:09.334826    8287 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/addons-724441/apiserver.key.cf9374aa ...
	I0906 18:30:09.334842    8287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/addons-724441/apiserver.key.cf9374aa: {Name:mk911ab0bb2c857747b1f2ed5ccc0213f77e0dc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:09.334928    8287 certs.go:381] copying /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/addons-724441/apiserver.crt.cf9374aa -> /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/addons-724441/apiserver.crt
	I0906 18:30:09.335014    8287 certs.go:385] copying /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/addons-724441/apiserver.key.cf9374aa -> /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/addons-724441/apiserver.key
	I0906 18:30:09.335069    8287 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/addons-724441/proxy-client.key
	I0906 18:30:09.335092    8287 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/addons-724441/proxy-client.crt with IP's: []
	I0906 18:30:09.838584    8287 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/addons-724441/proxy-client.crt ...
	I0906 18:30:09.838661    8287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/addons-724441/proxy-client.crt: {Name:mk777f956ef33d94e0675bd798618ba5eb369e91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:09.838890    8287 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/addons-724441/proxy-client.key ...
	I0906 18:30:09.838925    8287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/addons-724441/proxy-client.key: {Name:mk36716f881e06274eea09b310db57147e396223 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:09.839163    8287 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-2220/.minikube/certs/ca-key.pem (1679 bytes)
	I0906 18:30:09.839227    8287 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-2220/.minikube/certs/ca.pem (1078 bytes)
	I0906 18:30:09.839290    8287 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-2220/.minikube/certs/cert.pem (1123 bytes)
	I0906 18:30:09.839345    8287 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-2220/.minikube/certs/key.pem (1679 bytes)
	I0906 18:30:09.839978    8287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-2220/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 18:30:09.864277    8287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-2220/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0906 18:30:09.888184    8287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-2220/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 18:30:09.912193    8287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-2220/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 18:30:09.935963    8287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/addons-724441/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0906 18:30:09.959279    8287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/addons-724441/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0906 18:30:09.982527    8287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/addons-724441/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 18:30:10.007565    8287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/addons-724441/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0906 18:30:10.042121    8287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-2220/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 18:30:10.070195    8287 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 18:30:10.090862    8287 ssh_runner.go:195] Run: openssl version
	I0906 18:30:10.096909    8287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 18:30:10.107553    8287 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 18:30:10.111261    8287 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:30 /usr/share/ca-certificates/minikubeCA.pem
	I0906 18:30:10.111333    8287 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 18:30:10.118807    8287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0906 18:30:10.128626    8287 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 18:30:10.132087    8287 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0906 18:30:10.132136    8287 kubeadm.go:392] StartCluster: {Name:addons-724441 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-724441 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 18:30:10.132268    8287 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0906 18:30:10.149606    8287 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 18:30:10.160916    8287 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 18:30:10.170226    8287 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0906 18:30:10.170290    8287 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 18:30:10.179655    8287 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 18:30:10.179675    8287 kubeadm.go:157] found existing configuration files:
	
	I0906 18:30:10.179750    8287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 18:30:10.188561    8287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 18:30:10.188665    8287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 18:30:10.197369    8287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 18:30:10.206467    8287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 18:30:10.206575    8287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 18:30:10.215117    8287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 18:30:10.223867    8287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 18:30:10.223939    8287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 18:30:10.232408    8287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 18:30:10.241261    8287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 18:30:10.241342    8287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 18:30:10.249687    8287 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0906 18:30:10.292447    8287 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0906 18:30:10.292777    8287 kubeadm.go:310] [preflight] Running pre-flight checks
	I0906 18:30:10.314991    8287 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0906 18:30:10.315062    8287 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-aws
	I0906 18:30:10.315103    8287 kubeadm.go:310] OS: Linux
	I0906 18:30:10.315152    8287 kubeadm.go:310] CGROUPS_CPU: enabled
	I0906 18:30:10.315202    8287 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0906 18:30:10.315251    8287 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0906 18:30:10.315301    8287 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0906 18:30:10.315352    8287 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0906 18:30:10.315402    8287 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0906 18:30:10.315448    8287 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0906 18:30:10.315501    8287 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0906 18:30:10.315550    8287 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0906 18:30:10.378046    8287 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 18:30:10.378194    8287 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 18:30:10.378323    8287 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0906 18:30:10.393757    8287 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 18:30:10.399878    8287 out.go:235]   - Generating certificates and keys ...
	I0906 18:30:10.400057    8287 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0906 18:30:10.400159    8287 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0906 18:30:10.517597    8287 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0906 18:30:11.228275    8287 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0906 18:30:12.143537    8287 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0906 18:30:12.433508    8287 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0906 18:30:12.626478    8287 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0906 18:30:12.626935    8287 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-724441 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0906 18:30:12.832741    8287 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0906 18:30:12.833102    8287 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-724441 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0906 18:30:12.960251    8287 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0906 18:30:13.319270    8287 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0906 18:30:13.603598    8287 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0906 18:30:13.603930    8287 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 18:30:13.784992    8287 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 18:30:14.110057    8287 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0906 18:30:14.607131    8287 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 18:30:15.011900    8287 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 18:30:15.363198    8287 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 18:30:15.363908    8287 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 18:30:15.366904    8287 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 18:30:15.370177    8287 out.go:235]   - Booting up control plane ...
	I0906 18:30:15.370274    8287 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 18:30:15.370348    8287 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 18:30:15.370413    8287 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 18:30:15.380676    8287 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 18:30:15.387283    8287 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 18:30:15.387556    8287 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0906 18:30:15.495565    8287 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0906 18:30:15.496020    8287 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0906 18:30:16.997845    8287 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.501771487s
	I0906 18:30:16.997956    8287 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0906 18:30:23.999616    8287 kubeadm.go:310] [api-check] The API server is healthy after 7.001811538s
	I0906 18:30:24.030655    8287 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0906 18:30:24.048190    8287 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0906 18:30:24.076360    8287 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0906 18:30:24.076826    8287 kubeadm.go:310] [mark-control-plane] Marking the node addons-724441 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0906 18:30:24.088805    8287 kubeadm.go:310] [bootstrap-token] Using token: 888q2u.ujs8443qs9f67403
	I0906 18:30:24.091388    8287 out.go:235]   - Configuring RBAC rules ...
	I0906 18:30:24.091519    8287 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0906 18:30:24.096834    8287 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0906 18:30:24.106187    8287 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0906 18:30:24.110903    8287 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0906 18:30:24.117440    8287 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0906 18:30:24.127079    8287 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0906 18:30:24.409071    8287 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0906 18:30:24.846238    8287 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0906 18:30:25.406436    8287 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0906 18:30:25.407530    8287 kubeadm.go:310] 
	I0906 18:30:25.407611    8287 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0906 18:30:25.407622    8287 kubeadm.go:310] 
	I0906 18:30:25.407696    8287 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0906 18:30:25.407705    8287 kubeadm.go:310] 
	I0906 18:30:25.407729    8287 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0906 18:30:25.407797    8287 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0906 18:30:25.407853    8287 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0906 18:30:25.407862    8287 kubeadm.go:310] 
	I0906 18:30:25.407914    8287 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0906 18:30:25.407922    8287 kubeadm.go:310] 
	I0906 18:30:25.407968    8287 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0906 18:30:25.407976    8287 kubeadm.go:310] 
	I0906 18:30:25.408026    8287 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0906 18:30:25.408102    8287 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0906 18:30:25.408176    8287 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0906 18:30:25.408185    8287 kubeadm.go:310] 
	I0906 18:30:25.408266    8287 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0906 18:30:25.408347    8287 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0906 18:30:25.408354    8287 kubeadm.go:310] 
	I0906 18:30:25.408434    8287 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 888q2u.ujs8443qs9f67403 \
	I0906 18:30:25.408555    8287 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2956532cf3929c78509205450c725935971a67cd99f3d59900f58de2f07be9e1 \
	I0906 18:30:25.408580    8287 kubeadm.go:310] 	--control-plane 
	I0906 18:30:25.408589    8287 kubeadm.go:310] 
	I0906 18:30:25.408671    8287 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0906 18:30:25.408678    8287 kubeadm.go:310] 
	I0906 18:30:25.408758    8287 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 888q2u.ujs8443qs9f67403 \
	I0906 18:30:25.408859    8287 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2956532cf3929c78509205450c725935971a67cd99f3d59900f58de2f07be9e1 
	I0906 18:30:25.412650    8287 kubeadm.go:310] W0906 18:30:10.288946    1813 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 18:30:25.412967    8287 kubeadm.go:310] W0906 18:30:10.289996    1813 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 18:30:25.413187    8287 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-aws\n", err: exit status 1
	I0906 18:30:25.413302    8287 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 18:30:25.413324    8287 cni.go:84] Creating CNI manager for ""
	I0906 18:30:25.413345    8287 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 18:30:25.417908    8287 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0906 18:30:25.420567    8287 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0906 18:30:25.432203    8287 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0906 18:30:25.451950    8287 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 18:30:25.452039    8287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:30:25.452081    8287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-724441 minikube.k8s.io/updated_at=2024_09_06T18_30_25_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13 minikube.k8s.io/name=addons-724441 minikube.k8s.io/primary=true
	I0906 18:30:25.613524    8287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:30:25.613597    8287 ops.go:34] apiserver oom_adj: -16
	I0906 18:30:26.114641    8287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:30:26.614193    8287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:30:27.113587    8287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:30:27.613680    8287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:30:28.114608    8287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:30:28.614622    8287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:30:29.114595    8287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:30:29.614573    8287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:30:29.716095    8287 kubeadm.go:1113] duration metric: took 4.264134786s to wait for elevateKubeSystemPrivileges
	I0906 18:30:29.716126    8287 kubeadm.go:394] duration metric: took 19.583993491s to StartCluster
	I0906 18:30:29.716143    8287 settings.go:142] acquiring lock: {Name:mk716bfad607508bc477e98fe4a9bc8ea674674f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:29.716250    8287 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19576-2220/kubeconfig
	I0906 18:30:29.717107    8287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-2220/kubeconfig: {Name:mk056f99455c7b1420541fa83a3c49635f1402e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:29.717644    8287 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0906 18:30:29.718337    8287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0906 18:30:29.718668    8287 config.go:182] Loaded profile config "addons-724441": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 18:30:29.718716    8287 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0906 18:30:29.719126    8287 addons.go:69] Setting yakd=true in profile "addons-724441"
	I0906 18:30:29.719159    8287 addons.go:234] Setting addon yakd=true in "addons-724441"
	I0906 18:30:29.719192    8287 host.go:66] Checking if "addons-724441" exists ...
	I0906 18:30:29.719765    8287 cli_runner.go:164] Run: docker container inspect addons-724441 --format={{.State.Status}}
	I0906 18:30:29.721431    8287 addons.go:69] Setting metrics-server=true in profile "addons-724441"
	I0906 18:30:29.721461    8287 addons.go:234] Setting addon metrics-server=true in "addons-724441"
	I0906 18:30:29.721553    8287 host.go:66] Checking if "addons-724441" exists ...
	I0906 18:30:29.722037    8287 cli_runner.go:164] Run: docker container inspect addons-724441 --format={{.State.Status}}
	I0906 18:30:29.723491    8287 out.go:177] * Verifying Kubernetes components...
	I0906 18:30:29.723695    8287 addons.go:69] Setting cloud-spanner=true in profile "addons-724441"
	I0906 18:30:29.723730    8287 addons.go:234] Setting addon cloud-spanner=true in "addons-724441"
	I0906 18:30:29.723758    8287 host.go:66] Checking if "addons-724441" exists ...
	I0906 18:30:29.724286    8287 cli_runner.go:164] Run: docker container inspect addons-724441 --format={{.State.Status}}
	I0906 18:30:29.725726    8287 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-724441"
	I0906 18:30:29.725766    8287 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-724441"
	I0906 18:30:29.725900    8287 host.go:66] Checking if "addons-724441" exists ...
	I0906 18:30:29.726094    8287 addons.go:69] Setting registry=true in profile "addons-724441"
	I0906 18:30:29.726145    8287 addons.go:234] Setting addon registry=true in "addons-724441"
	I0906 18:30:29.726172    8287 host.go:66] Checking if "addons-724441" exists ...
	I0906 18:30:29.726793    8287 cli_runner.go:164] Run: docker container inspect addons-724441 --format={{.State.Status}}
	I0906 18:30:29.732087    8287 addons.go:69] Setting storage-provisioner=true in profile "addons-724441"
	I0906 18:30:29.732131    8287 addons.go:234] Setting addon storage-provisioner=true in "addons-724441"
	I0906 18:30:29.732188    8287 host.go:66] Checking if "addons-724441" exists ...
	I0906 18:30:29.732755    8287 cli_runner.go:164] Run: docker container inspect addons-724441 --format={{.State.Status}}
	I0906 18:30:29.738972    8287 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-724441"
	I0906 18:30:29.739016    8287 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-724441"
	I0906 18:30:29.739373    8287 cli_runner.go:164] Run: docker container inspect addons-724441 --format={{.State.Status}}
	I0906 18:30:29.739618    8287 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-724441"
	I0906 18:30:29.739676    8287 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-724441"
	I0906 18:30:29.739712    8287 host.go:66] Checking if "addons-724441" exists ...
	I0906 18:30:29.740190    8287 cli_runner.go:164] Run: docker container inspect addons-724441 --format={{.State.Status}}
	I0906 18:30:29.757485    8287 addons.go:69] Setting volcano=true in profile "addons-724441"
	I0906 18:30:29.757545    8287 addons.go:234] Setting addon volcano=true in "addons-724441"
	I0906 18:30:29.757583    8287 host.go:66] Checking if "addons-724441" exists ...
	I0906 18:30:29.758119    8287 cli_runner.go:164] Run: docker container inspect addons-724441 --format={{.State.Status}}
	I0906 18:30:29.758274    8287 addons.go:69] Setting default-storageclass=true in profile "addons-724441"
	I0906 18:30:29.758329    8287 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-724441"
	I0906 18:30:29.758588    8287 cli_runner.go:164] Run: docker container inspect addons-724441 --format={{.State.Status}}
	I0906 18:30:29.790494    8287 addons.go:69] Setting volumesnapshots=true in profile "addons-724441"
	I0906 18:30:29.790603    8287 addons.go:234] Setting addon volumesnapshots=true in "addons-724441"
	I0906 18:30:29.790674    8287 host.go:66] Checking if "addons-724441" exists ...
	I0906 18:30:29.791251    8287 cli_runner.go:164] Run: docker container inspect addons-724441 --format={{.State.Status}}
	I0906 18:30:29.791479    8287 addons.go:69] Setting gcp-auth=true in profile "addons-724441"
	I0906 18:30:29.791550    8287 mustload.go:65] Loading cluster: addons-724441
	I0906 18:30:29.791753    8287 config.go:182] Loaded profile config "addons-724441": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 18:30:29.792076    8287 cli_runner.go:164] Run: docker container inspect addons-724441 --format={{.State.Status}}
	I0906 18:30:29.820449    8287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 18:30:29.821411    8287 cli_runner.go:164] Run: docker container inspect addons-724441 --format={{.State.Status}}
	I0906 18:30:29.824374    8287 addons.go:69] Setting ingress=true in profile "addons-724441"
	I0906 18:30:29.824472    8287 addons.go:234] Setting addon ingress=true in "addons-724441"
	I0906 18:30:29.824559    8287 host.go:66] Checking if "addons-724441" exists ...
	I0906 18:30:29.825136    8287 cli_runner.go:164] Run: docker container inspect addons-724441 --format={{.State.Status}}
	I0906 18:30:29.842744    8287 addons.go:69] Setting ingress-dns=true in profile "addons-724441"
	I0906 18:30:29.842848    8287 addons.go:234] Setting addon ingress-dns=true in "addons-724441"
	I0906 18:30:29.842936    8287 host.go:66] Checking if "addons-724441" exists ...
	I0906 18:30:29.843554    8287 cli_runner.go:164] Run: docker container inspect addons-724441 --format={{.State.Status}}
	I0906 18:30:29.861819    8287 addons.go:69] Setting inspektor-gadget=true in profile "addons-724441"
	I0906 18:30:29.861922    8287 addons.go:234] Setting addon inspektor-gadget=true in "addons-724441"
	I0906 18:30:29.861998    8287 host.go:66] Checking if "addons-724441" exists ...
	I0906 18:30:29.862727    8287 cli_runner.go:164] Run: docker container inspect addons-724441 --format={{.State.Status}}
	I0906 18:30:29.943147    8287 out.go:177]   - Using image docker.io/registry:2.8.3
	I0906 18:30:29.945246    8287 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0906 18:30:29.955084    8287 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0906 18:30:29.955372    8287 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0906 18:30:29.955417    8287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0906 18:30:29.955520    8287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-724441
	I0906 18:30:29.961583    8287 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0906 18:30:29.961644    8287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0906 18:30:29.961746    8287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-724441
	I0906 18:30:29.965689    8287 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0906 18:30:29.971160    8287 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0906 18:30:29.971257    8287 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0906 18:30:29.971388    8287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-724441
	I0906 18:30:29.987756    8287 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0906 18:30:29.987999    8287 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0906 18:30:30.002660    8287 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-724441"
	I0906 18:30:30.002786    8287 host.go:66] Checking if "addons-724441" exists ...
	I0906 18:30:30.004534    8287 addons.go:234] Setting addon default-storageclass=true in "addons-724441"
	I0906 18:30:30.004617    8287 host.go:66] Checking if "addons-724441" exists ...
	I0906 18:30:30.005219    8287 cli_runner.go:164] Run: docker container inspect addons-724441 --format={{.State.Status}}
	I0906 18:30:30.021913    8287 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 18:30:30.022525    8287 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0906 18:30:30.022549    8287 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0906 18:30:30.022754    8287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-724441
	I0906 18:30:30.025658    8287 host.go:66] Checking if "addons-724441" exists ...
	I0906 18:30:30.028948    8287 cli_runner.go:164] Run: docker container inspect addons-724441 --format={{.State.Status}}
	I0906 18:30:30.044379    8287 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0906 18:30:30.045378    8287 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0906 18:30:30.045366    8287 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 18:30:30.045597    8287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0906 18:30:30.045679    8287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-724441
	I0906 18:30:30.051967    8287 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0906 18:30:30.052304    8287 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0906 18:30:30.082642    8287 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0906 18:30:30.084024    8287 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0906 18:30:30.093637    8287 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0906 18:30:30.095182    8287 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0906 18:30:30.095202    8287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0906 18:30:30.095280    8287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-724441
	I0906 18:30:30.095881    8287 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0906 18:30:30.096391    8287 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0906 18:30:30.096410    8287 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0906 18:30:30.096482    8287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-724441
	I0906 18:30:30.101534    8287 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0906 18:30:30.108383    8287 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0906 18:30:30.109315    8287 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0906 18:30:30.110772    8287 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0906 18:30:30.110929    8287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0906 18:30:30.111008    8287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-724441
	I0906 18:30:30.120396    8287 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0906 18:30:30.120421    8287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0906 18:30:30.120499    8287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-724441
	I0906 18:30:30.131841    8287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19576-2220/.minikube/machines/addons-724441/id_rsa Username:docker}
	I0906 18:30:30.136117    8287 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0906 18:30:30.137166    8287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19576-2220/.minikube/machines/addons-724441/id_rsa Username:docker}
	I0906 18:30:30.138009    8287 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0906 18:30:30.138177    8287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19576-2220/.minikube/machines/addons-724441/id_rsa Username:docker}
	I0906 18:30:30.138840    8287 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0906 18:30:30.138862    8287 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0906 18:30:30.138927    8287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-724441
	I0906 18:30:30.140790    8287 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0906 18:30:30.140892    8287 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0906 18:30:30.141013    8287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-724441
	I0906 18:30:30.165570    8287 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0906 18:30:30.168366    8287 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0906 18:30:30.173515    8287 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0906 18:30:30.173666    8287 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0906 18:30:30.177517    8287 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0906 18:30:30.177547    8287 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0906 18:30:30.177628    8287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-724441
	I0906 18:30:30.177972    8287 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0906 18:30:30.177984    8287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0906 18:30:30.178033    8287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-724441
	I0906 18:30:30.313782    8287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19576-2220/.minikube/machines/addons-724441/id_rsa Username:docker}
	I0906 18:30:30.325615    8287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19576-2220/.minikube/machines/addons-724441/id_rsa Username:docker}
	I0906 18:30:30.334129    8287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19576-2220/.minikube/machines/addons-724441/id_rsa Username:docker}
	I0906 18:30:30.341245    8287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19576-2220/.minikube/machines/addons-724441/id_rsa Username:docker}
	I0906 18:30:30.353726    8287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19576-2220/.minikube/machines/addons-724441/id_rsa Username:docker}
	I0906 18:30:30.356420    8287 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0906 18:30:30.356579    8287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19576-2220/.minikube/machines/addons-724441/id_rsa Username:docker}
	I0906 18:30:30.368831    8287 out.go:177]   - Using image docker.io/busybox:stable
	I0906 18:30:30.371722    8287 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0906 18:30:30.371743    8287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0906 18:30:30.371809    8287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-724441
	I0906 18:30:30.376182    8287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19576-2220/.minikube/machines/addons-724441/id_rsa Username:docker}
	I0906 18:30:30.377194    8287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19576-2220/.minikube/machines/addons-724441/id_rsa Username:docker}
	I0906 18:30:30.378056    8287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19576-2220/.minikube/machines/addons-724441/id_rsa Username:docker}
	I0906 18:30:30.378848    8287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19576-2220/.minikube/machines/addons-724441/id_rsa Username:docker}
	I0906 18:30:30.411570    8287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19576-2220/.minikube/machines/addons-724441/id_rsa Username:docker}
	W0906 18:30:30.412258    8287 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0906 18:30:30.412304    8287 retry.go:31] will retry after 159.615299ms: ssh: handshake failed: EOF
	W0906 18:30:30.412846    8287 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0906 18:30:30.412861    8287 retry.go:31] will retry after 338.561295ms: ssh: handshake failed: EOF
	I0906 18:30:30.577593    8287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0906 18:30:30.577783    8287 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 18:30:30.679053    8287 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0906 18:30:30.679077    8287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0906 18:30:30.754812    8287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0906 18:30:30.849794    8287 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0906 18:30:30.849863    8287 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0906 18:30:30.858507    8287 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0906 18:30:30.858575    8287 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0906 18:30:30.940001    8287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0906 18:30:31.119918    8287 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0906 18:30:31.119987    8287 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0906 18:30:31.167657    8287 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0906 18:30:31.167679    8287 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0906 18:30:31.193453    8287 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0906 18:30:31.193518    8287 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0906 18:30:31.209631    8287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0906 18:30:31.230138    8287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0906 18:30:31.264828    8287 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0906 18:30:31.264908    8287 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0906 18:30:31.291447    8287 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0906 18:30:31.291533    8287 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0906 18:30:31.382243    8287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0906 18:30:31.396857    8287 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0906 18:30:31.396881    8287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0906 18:30:31.431545    8287 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0906 18:30:31.431571    8287 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0906 18:30:31.556435    8287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 18:30:31.621456    8287 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0906 18:30:31.621492    8287 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0906 18:30:31.695324    8287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0906 18:30:31.706128    8287 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 18:30:31.706164    8287 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0906 18:30:31.709048    8287 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0906 18:30:31.709123    8287 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0906 18:30:31.735802    8287 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0906 18:30:31.735882    8287 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0906 18:30:31.772137    8287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0906 18:30:31.871771    8287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0906 18:30:31.883989    8287 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0906 18:30:31.884063    8287 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0906 18:30:31.975782    8287 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0906 18:30:31.975810    8287 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0906 18:30:31.988200    8287 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0906 18:30:31.988228    8287 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0906 18:30:32.006552    8287 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0906 18:30:32.006576    8287 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0906 18:30:32.061369    8287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 18:30:32.203800    8287 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0906 18:30:32.203827    8287 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0906 18:30:32.326761    8287 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0906 18:30:32.326788    8287 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0906 18:30:32.401939    8287 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0906 18:30:32.401965    8287 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0906 18:30:32.420758    8287 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0906 18:30:32.420785    8287 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0906 18:30:32.513187    8287 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0906 18:30:32.513211    8287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0906 18:30:32.623881    8287 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0906 18:30:32.623908    8287 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0906 18:30:32.687126    8287 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0906 18:30:32.687151    8287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0906 18:30:32.727810    8287 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0906 18:30:32.727834    8287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0906 18:30:32.774513    8287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0906 18:30:32.971205    8287 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0906 18:30:32.971231    8287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0906 18:30:33.043496    8287 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0906 18:30:33.043527    8287 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0906 18:30:33.109504    8287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0906 18:30:33.301727    8287 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0906 18:30:33.301752    8287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0906 18:30:33.337652    8287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0906 18:30:33.364903    8287 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.787066261s)
	I0906 18:30:33.365741    8287 node_ready.go:35] waiting up to 6m0s for node "addons-724441" to be "Ready" ...
	I0906 18:30:33.365918    8287 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.788253383s)
	I0906 18:30:33.365939    8287 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0906 18:30:33.369291    8287 node_ready.go:49] node "addons-724441" has status "Ready":"True"
	I0906 18:30:33.369318    8287 node_ready.go:38] duration metric: took 3.551044ms for node "addons-724441" to be "Ready" ...
	I0906 18:30:33.369328    8287 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 18:30:33.388730    8287 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-bcw8r" in "kube-system" namespace to be "Ready" ...
	I0906 18:30:33.490346    8287 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0906 18:30:33.490435    8287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0906 18:30:33.809811    8287 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0906 18:30:33.809892    8287 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0906 18:30:33.834854    8287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0906 18:30:33.870044    8287 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-724441" context rescaled to 1 replicas
	I0906 18:30:35.418272    8287 pod_ready.go:103] pod "coredns-6f6b679f8f-bcw8r" in "kube-system" namespace has status "Ready":"False"
	I0906 18:30:37.157280    8287 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0906 18:30:37.157431    8287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-724441
	I0906 18:30:37.184494    8287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19576-2220/.minikube/machines/addons-724441/id_rsa Username:docker}
	I0906 18:30:37.906035    8287 pod_ready.go:103] pod "coredns-6f6b679f8f-bcw8r" in "kube-system" namespace has status "Ready":"False"
	I0906 18:30:38.405171    8287 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0906 18:30:38.682619    8287 addons.go:234] Setting addon gcp-auth=true in "addons-724441"
	I0906 18:30:38.682739    8287 host.go:66] Checking if "addons-724441" exists ...
	I0906 18:30:38.683264    8287 cli_runner.go:164] Run: docker container inspect addons-724441 --format={{.State.Status}}
	I0906 18:30:38.707050    8287 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0906 18:30:38.707112    8287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-724441
	I0906 18:30:38.733496    8287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19576-2220/.minikube/machines/addons-724441/id_rsa Username:docker}
	I0906 18:30:39.948876    8287 pod_ready.go:103] pod "coredns-6f6b679f8f-bcw8r" in "kube-system" namespace has status "Ready":"False"
	I0906 18:30:40.231458    8287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.476563436s)
	I0906 18:30:40.231490    8287 addons.go:475] Verifying addon ingress=true in "addons-724441"
	I0906 18:30:40.231665    8287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (9.291575343s)
	I0906 18:30:40.231741    8287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (9.02204006s)
	I0906 18:30:40.233718    8287 out.go:177] * Verifying ingress addon...
	I0906 18:30:40.236449    8287 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0906 18:30:40.250124    8287 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0906 18:30:40.250149    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:40.741634    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:41.241706    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:41.782347    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:42.248545    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:42.431148    8287 pod_ready.go:103] pod "coredns-6f6b679f8f-bcw8r" in "kube-system" namespace has status "Ready":"False"
	I0906 18:30:42.790833    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:42.933860    8287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (11.703648168s)
	I0906 18:30:42.934113    8287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (11.377654575s)
	I0906 18:30:42.934150    8287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (11.551775139s)
	I0906 18:30:42.934180    8287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (11.238834937s)
	I0906 18:30:42.934410    8287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (11.162202587s)
	I0906 18:30:42.934572    8287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (11.062627807s)
	I0906 18:30:42.934611    8287 addons.go:475] Verifying addon registry=true in "addons-724441"
	I0906 18:30:42.934727    8287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.873294495s)
	I0906 18:30:42.934741    8287 addons.go:475] Verifying addon metrics-server=true in "addons-724441"
	I0906 18:30:42.934785    8287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (10.1602504s)
	I0906 18:30:42.935213    8287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (9.825599274s)
	W0906 18:30:42.935264    8287 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0906 18:30:42.935489    8287 retry.go:31] will retry after 304.578271ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0906 18:30:42.935381    8287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (9.59769157s)
	I0906 18:30:42.938109    8287 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-724441 service yakd-dashboard -n yakd-dashboard
	
	I0906 18:30:42.938263    8287 out.go:177] * Verifying registry addon...
	I0906 18:30:42.941695    8287 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0906 18:30:42.957913    8287 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0906 18:30:42.957935    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:43.240566    8287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0906 18:30:43.289134    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:43.446441    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:43.798758    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:43.881965    8287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (10.047017573s)
	I0906 18:30:43.881995    8287 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-724441"
	I0906 18:30:43.882186    8287 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.175116551s)
	I0906 18:30:43.886698    8287 out.go:177] * Verifying csi-hostpath-driver addon...
	I0906 18:30:43.886767    8287 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0906 18:30:43.890682    8287 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0906 18:30:43.893538    8287 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0906 18:30:43.896524    8287 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0906 18:30:43.896554    8287 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0906 18:30:43.899147    8287 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0906 18:30:43.899174    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:43.947747    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:43.963567    8287 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0906 18:30:43.963592    8287 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0906 18:30:44.096160    8287 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0906 18:30:44.096229    8287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0906 18:30:44.168901    8287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0906 18:30:44.241833    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:44.399761    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:44.451558    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:44.741528    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:44.896538    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:44.901648    8287 pod_ready.go:103] pod "coredns-6f6b679f8f-bcw8r" in "kube-system" namespace has status "Ready":"False"
	I0906 18:30:44.945851    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:45.244309    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:45.397944    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:45.448787    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:45.663265    8287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.42261438s)
	I0906 18:30:45.663362    8287 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.494389862s)
	I0906 18:30:45.668584    8287 addons.go:475] Verifying addon gcp-auth=true in "addons-724441"
	I0906 18:30:45.673152    8287 out.go:177] * Verifying gcp-auth addon...
	I0906 18:30:45.676411    8287 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0906 18:30:45.679572    8287 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0906 18:30:45.741771    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:45.899391    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:45.945427    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:46.241249    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:46.395611    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:46.445733    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:46.783685    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:46.901013    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:46.947015    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:47.241550    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:47.396749    8287 pod_ready.go:103] pod "coredns-6f6b679f8f-bcw8r" in "kube-system" namespace has status "Ready":"False"
	I0906 18:30:47.397060    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:47.445934    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:47.782543    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:47.896178    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:47.995778    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:48.240482    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:48.394714    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:48.445691    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:48.741287    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:48.896531    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:48.946187    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:49.282193    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:49.398279    8287 pod_ready.go:103] pod "coredns-6f6b679f8f-bcw8r" in "kube-system" namespace has status "Ready":"False"
	I0906 18:30:49.400705    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:49.445911    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:49.741290    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:49.897309    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:49.945225    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:50.241858    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:50.396716    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:50.445543    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:50.741201    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:50.897078    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:50.945596    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:51.241020    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:51.396861    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:51.445238    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:51.741148    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:51.897211    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:51.897748    8287 pod_ready.go:103] pod "coredns-6f6b679f8f-bcw8r" in "kube-system" namespace has status "Ready":"False"
	I0906 18:30:51.996486    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:52.241096    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:52.398467    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:52.446426    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:52.740586    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:52.896793    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:52.946489    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:53.240523    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:53.399995    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:53.445915    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:53.782709    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:53.897491    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:53.945934    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:54.282249    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:54.396788    8287 pod_ready.go:103] pod "coredns-6f6b679f8f-bcw8r" in "kube-system" namespace has status "Ready":"False"
	I0906 18:30:54.398358    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:54.445589    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:54.742360    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:54.904643    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:54.945087    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:55.241371    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:55.396168    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:55.445517    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:55.741001    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:55.899812    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:55.945236    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:56.241647    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:56.399742    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:56.446519    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:56.740763    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:56.897983    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:56.898559    8287 pod_ready.go:103] pod "coredns-6f6b679f8f-bcw8r" in "kube-system" namespace has status "Ready":"False"
	I0906 18:30:56.946327    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:57.240827    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:57.396945    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:57.449244    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:57.741179    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:57.896286    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:57.945061    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:58.241365    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:58.395392    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:58.445498    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:58.740880    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:58.898561    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:58.898936    8287 pod_ready.go:103] pod "coredns-6f6b679f8f-bcw8r" in "kube-system" namespace has status "Ready":"False"
	I0906 18:30:58.946057    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:59.263291    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:59.399723    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:59.445068    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:59.743189    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:59.898953    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:59.946113    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:31:00.299731    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:00.400335    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:00.446864    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:31:00.740950    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:00.900311    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:00.946246    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:31:01.241371    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:01.398771    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:01.399989    8287 pod_ready.go:103] pod "coredns-6f6b679f8f-bcw8r" in "kube-system" namespace has status "Ready":"False"
	I0906 18:31:01.446002    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:31:01.741249    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:01.897649    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:01.945059    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:31:02.246439    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:02.398992    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:02.446166    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:31:02.740684    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:02.896268    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:02.897643    8287 pod_ready.go:93] pod "coredns-6f6b679f8f-bcw8r" in "kube-system" namespace has status "Ready":"True"
	I0906 18:31:02.897664    8287 pod_ready.go:82] duration metric: took 29.508856226s for pod "coredns-6f6b679f8f-bcw8r" in "kube-system" namespace to be "Ready" ...
	I0906 18:31:02.897676    8287 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-j9l8m" in "kube-system" namespace to be "Ready" ...
	I0906 18:31:02.899452    8287 pod_ready.go:98] error getting pod "coredns-6f6b679f8f-j9l8m" in "kube-system" namespace (skipping!): pods "coredns-6f6b679f8f-j9l8m" not found
	I0906 18:31:02.899476    8287 pod_ready.go:82] duration metric: took 1.792936ms for pod "coredns-6f6b679f8f-j9l8m" in "kube-system" namespace to be "Ready" ...
	E0906 18:31:02.899487    8287 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-6f6b679f8f-j9l8m" in "kube-system" namespace (skipping!): pods "coredns-6f6b679f8f-j9l8m" not found
	I0906 18:31:02.899494    8287 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-724441" in "kube-system" namespace to be "Ready" ...
	I0906 18:31:02.904162    8287 pod_ready.go:93] pod "etcd-addons-724441" in "kube-system" namespace has status "Ready":"True"
	I0906 18:31:02.904183    8287 pod_ready.go:82] duration metric: took 4.682766ms for pod "etcd-addons-724441" in "kube-system" namespace to be "Ready" ...
	I0906 18:31:02.904193    8287 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-724441" in "kube-system" namespace to be "Ready" ...
	I0906 18:31:02.909981    8287 pod_ready.go:93] pod "kube-apiserver-addons-724441" in "kube-system" namespace has status "Ready":"True"
	I0906 18:31:02.910056    8287 pod_ready.go:82] duration metric: took 5.853907ms for pod "kube-apiserver-addons-724441" in "kube-system" namespace to be "Ready" ...
	I0906 18:31:02.910090    8287 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-724441" in "kube-system" namespace to be "Ready" ...
	I0906 18:31:02.916029    8287 pod_ready.go:93] pod "kube-controller-manager-addons-724441" in "kube-system" namespace has status "Ready":"True"
	I0906 18:31:02.916093    8287 pod_ready.go:82] duration metric: took 5.962026ms for pod "kube-controller-manager-addons-724441" in "kube-system" namespace to be "Ready" ...
	I0906 18:31:02.916119    8287 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6qfvk" in "kube-system" namespace to be "Ready" ...
	I0906 18:31:02.945700    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:31:03.092950    8287 pod_ready.go:93] pod "kube-proxy-6qfvk" in "kube-system" namespace has status "Ready":"True"
	I0906 18:31:03.093019    8287 pod_ready.go:82] duration metric: took 176.874979ms for pod "kube-proxy-6qfvk" in "kube-system" namespace to be "Ready" ...
	I0906 18:31:03.093044    8287 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-724441" in "kube-system" namespace to be "Ready" ...
	I0906 18:31:03.241148    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:03.394965    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:03.446233    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:31:03.493168    8287 pod_ready.go:93] pod "kube-scheduler-addons-724441" in "kube-system" namespace has status "Ready":"True"
	I0906 18:31:03.493234    8287 pod_ready.go:82] duration metric: took 400.169237ms for pod "kube-scheduler-addons-724441" in "kube-system" namespace to be "Ready" ...
	I0906 18:31:03.493260    8287 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-n2d4c" in "kube-system" namespace to be "Ready" ...
	I0906 18:31:03.782761    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:03.895565    8287 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-n2d4c" in "kube-system" namespace has status "Ready":"True"
	I0906 18:31:03.895647    8287 pod_ready.go:82] duration metric: took 402.36551ms for pod "nvidia-device-plugin-daemonset-n2d4c" in "kube-system" namespace to be "Ready" ...
	I0906 18:31:03.895673    8287 pod_ready.go:39] duration metric: took 30.526332232s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 18:31:03.895722    8287 api_server.go:52] waiting for apiserver process to appear ...
	I0906 18:31:03.895822    8287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 18:31:03.897237    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:03.913818    8287 api_server.go:72] duration metric: took 34.196128403s to wait for apiserver process to appear ...
	I0906 18:31:03.913843    8287 api_server.go:88] waiting for apiserver healthz status ...
	I0906 18:31:03.913862    8287 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0906 18:31:03.922715    8287 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0906 18:31:03.923977    8287 api_server.go:141] control plane version: v1.31.0
	I0906 18:31:03.924041    8287 api_server.go:131] duration metric: took 10.190076ms to wait for apiserver health ...
	I0906 18:31:03.924066    8287 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 18:31:03.946208    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:31:04.104264    8287 system_pods.go:59] 17 kube-system pods found
	I0906 18:31:04.104348    8287 system_pods.go:61] "coredns-6f6b679f8f-bcw8r" [990daecd-2875-41ba-80ee-a34f3b6f0cae] Running
	I0906 18:31:04.104374    8287 system_pods.go:61] "csi-hostpath-attacher-0" [77be7a77-44f2-4e6e-8d06-b7e5f9885d5a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0906 18:31:04.104423    8287 system_pods.go:61] "csi-hostpath-resizer-0" [115cb640-4df9-4e76-ab97-7ebf1fe5c6ca] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0906 18:31:04.104452    8287 system_pods.go:61] "csi-hostpathplugin-wttlw" [7b33574b-e8bf-4731-ae3b-2588e0136228] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0906 18:31:04.104476    8287 system_pods.go:61] "etcd-addons-724441" [c18a9806-8e57-4bfb-adf8-f6160c99d374] Running
	I0906 18:31:04.104496    8287 system_pods.go:61] "kube-apiserver-addons-724441" [34d1927e-66e7-4a95-af8c-95a7d7257fe7] Running
	I0906 18:31:04.104528    8287 system_pods.go:61] "kube-controller-manager-addons-724441" [9bf25fc1-a122-411f-a1da-fc2152923bdc] Running
	I0906 18:31:04.104551    8287 system_pods.go:61] "kube-ingress-dns-minikube" [570d7523-20c9-4750-999a-532a65767954] Running
	I0906 18:31:04.104571    8287 system_pods.go:61] "kube-proxy-6qfvk" [13037b3f-2fd1-4a12-820d-34fc35e8bad3] Running
	I0906 18:31:04.104591    8287 system_pods.go:61] "kube-scheduler-addons-724441" [40041771-f5c5-4bf8-a827-54a226f6ea42] Running
	I0906 18:31:04.104613    8287 system_pods.go:61] "metrics-server-84c5f94fbc-kfplw" [116ab06d-003d-4681-8809-50f22c5539e2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 18:31:04.104641    8287 system_pods.go:61] "nvidia-device-plugin-daemonset-n2d4c" [0d57344b-eb6f-437a-9097-2b55afb1f7a1] Running
	I0906 18:31:04.104666    8287 system_pods.go:61] "registry-6fb4cdfc84-f4qv7" [4c34f666-b1de-4d3c-8f16-830242c1fba7] Running
	I0906 18:31:04.104687    8287 system_pods.go:61] "registry-proxy-hmrwx" [5fe8e711-512c-42ae-88ce-cb1b93021495] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0906 18:31:04.104711    8287 system_pods.go:61] "snapshot-controller-56fcc65765-ckkbb" [759ac8fb-7a0c-49b9-b54b-ce15443b56d0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0906 18:31:04.104746    8287 system_pods.go:61] "snapshot-controller-56fcc65765-vx27w" [3a6c4d32-8dd7-4667-ba4f-7a3c552b4493] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0906 18:31:04.104769    8287 system_pods.go:61] "storage-provisioner" [40c71887-0359-4baa-8ee8-1151d7829b1b] Running
	I0906 18:31:04.104792    8287 system_pods.go:74] duration metric: took 180.706416ms to wait for pod list to return data ...
	I0906 18:31:04.104814    8287 default_sa.go:34] waiting for default service account to be created ...
	I0906 18:31:04.243101    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:04.294928    8287 default_sa.go:45] found service account: "default"
	I0906 18:31:04.294954    8287 default_sa.go:55] duration metric: took 190.119938ms for default service account to be created ...
	I0906 18:31:04.294964    8287 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 18:31:04.397127    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:04.449440    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:31:04.509246    8287 system_pods.go:86] 17 kube-system pods found
	I0906 18:31:04.509280    8287 system_pods.go:89] "coredns-6f6b679f8f-bcw8r" [990daecd-2875-41ba-80ee-a34f3b6f0cae] Running
	I0906 18:31:04.509292    8287 system_pods.go:89] "csi-hostpath-attacher-0" [77be7a77-44f2-4e6e-8d06-b7e5f9885d5a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0906 18:31:04.509299    8287 system_pods.go:89] "csi-hostpath-resizer-0" [115cb640-4df9-4e76-ab97-7ebf1fe5c6ca] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0906 18:31:04.509309    8287 system_pods.go:89] "csi-hostpathplugin-wttlw" [7b33574b-e8bf-4731-ae3b-2588e0136228] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0906 18:31:04.509320    8287 system_pods.go:89] "etcd-addons-724441" [c18a9806-8e57-4bfb-adf8-f6160c99d374] Running
	I0906 18:31:04.509325    8287 system_pods.go:89] "kube-apiserver-addons-724441" [34d1927e-66e7-4a95-af8c-95a7d7257fe7] Running
	I0906 18:31:04.509337    8287 system_pods.go:89] "kube-controller-manager-addons-724441" [9bf25fc1-a122-411f-a1da-fc2152923bdc] Running
	I0906 18:31:04.509342    8287 system_pods.go:89] "kube-ingress-dns-minikube" [570d7523-20c9-4750-999a-532a65767954] Running
	I0906 18:31:04.509347    8287 system_pods.go:89] "kube-proxy-6qfvk" [13037b3f-2fd1-4a12-820d-34fc35e8bad3] Running
	I0906 18:31:04.509363    8287 system_pods.go:89] "kube-scheduler-addons-724441" [40041771-f5c5-4bf8-a827-54a226f6ea42] Running
	I0906 18:31:04.509371    8287 system_pods.go:89] "metrics-server-84c5f94fbc-kfplw" [116ab06d-003d-4681-8809-50f22c5539e2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 18:31:04.509387    8287 system_pods.go:89] "nvidia-device-plugin-daemonset-n2d4c" [0d57344b-eb6f-437a-9097-2b55afb1f7a1] Running
	I0906 18:31:04.509393    8287 system_pods.go:89] "registry-6fb4cdfc84-f4qv7" [4c34f666-b1de-4d3c-8f16-830242c1fba7] Running
	I0906 18:31:04.509400    8287 system_pods.go:89] "registry-proxy-hmrwx" [5fe8e711-512c-42ae-88ce-cb1b93021495] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0906 18:31:04.509409    8287 system_pods.go:89] "snapshot-controller-56fcc65765-ckkbb" [759ac8fb-7a0c-49b9-b54b-ce15443b56d0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0906 18:31:04.509419    8287 system_pods.go:89] "snapshot-controller-56fcc65765-vx27w" [3a6c4d32-8dd7-4667-ba4f-7a3c552b4493] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0906 18:31:04.509427    8287 system_pods.go:89] "storage-provisioner" [40c71887-0359-4baa-8ee8-1151d7829b1b] Running
	I0906 18:31:04.509435    8287 system_pods.go:126] duration metric: took 214.465302ms to wait for k8s-apps to be running ...
	I0906 18:31:04.509448    8287 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 18:31:04.509509    8287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 18:31:04.536979    8287 system_svc.go:56] duration metric: took 27.520993ms WaitForService to wait for kubelet
	I0906 18:31:04.537016    8287 kubeadm.go:582] duration metric: took 34.819330069s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 18:31:04.537042    8287 node_conditions.go:102] verifying NodePressure condition ...
	I0906 18:31:04.695186    8287 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0906 18:31:04.695224    8287 node_conditions.go:123] node cpu capacity is 2
	I0906 18:31:04.695238    8287 node_conditions.go:105] duration metric: took 158.190178ms to run NodePressure ...
	I0906 18:31:04.695250    8287 start.go:241] waiting for startup goroutines ...
	I0906 18:31:04.746614    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:04.898712    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:04.948539    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:31:05.244364    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:05.397570    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:05.446240    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:31:05.787245    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:05.897262    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:05.945426    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:31:06.241907    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:06.396269    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:06.446501    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:31:06.740903    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:06.896127    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:06.945607    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:31:07.243642    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:07.395731    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:07.445814    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:31:07.742030    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:07.896605    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:07.946311    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:31:08.242404    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:08.395079    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:08.445899    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:31:08.741520    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:08.895834    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:08.945363    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:31:09.284453    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:09.395591    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:09.446372    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:31:09.781861    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:09.896220    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:09.945860    8287 kapi.go:107] duration metric: took 27.004162296s to wait for kubernetes.io/minikube-addons=registry ...
	I0906 18:31:10.241732    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:10.397609    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:10.747571    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:10.895722    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:11.240577    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:11.395698    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:11.741410    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:11.896034    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:12.241653    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:12.396288    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:12.741559    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:12.896277    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:13.241048    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:13.396810    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:13.771862    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:13.897713    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:14.241854    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:14.396271    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:14.741795    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:14.895782    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:15.241615    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:15.396691    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:15.740934    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:15.896686    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:16.242596    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:16.397297    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:16.746337    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:16.895511    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:17.241852    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:17.396761    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:17.781438    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:17.896796    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:18.240783    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:18.396396    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:18.741310    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:18.896128    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:19.241446    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:19.396309    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:19.741784    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:19.895720    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:20.243947    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:20.404284    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:20.741128    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:20.896004    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:21.241842    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:21.396224    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:21.744529    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:21.896620    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:22.242263    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:22.399925    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:22.741026    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:22.896570    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:23.241076    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:23.397243    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:23.741827    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:23.896283    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:24.240981    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:24.396182    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:24.749951    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:24.899366    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:25.241955    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:25.396677    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:25.742350    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:25.896989    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:26.246092    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:26.396276    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:26.741671    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:26.895433    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:27.241604    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:27.395334    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:27.755911    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:27.896157    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:28.282751    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:28.395390    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:28.742116    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:28.895806    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:29.242782    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:29.396487    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:29.741042    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:29.895524    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:30.242612    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:30.396574    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:30.741346    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:30.896365    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:31.243830    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:31.398702    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:31.741226    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:31.896178    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:32.242826    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:32.398949    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:32.782970    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:32.895936    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:33.240520    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:33.395093    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:33.741884    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:33.895936    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:34.241527    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:34.395862    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:34.740547    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:34.895936    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:35.244099    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:35.396500    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:35.782389    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:35.896995    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:36.242043    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:36.396350    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:36.746072    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:36.896026    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:37.241940    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:37.396073    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:37.783733    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:37.897303    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:38.241931    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:38.395334    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:38.741325    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:38.895736    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:39.241312    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:39.396272    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:39.782314    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:39.895157    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:40.240450    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:40.396645    8287 kapi.go:107] duration metric: took 56.505970921s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0906 18:31:40.740961    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:41.241740    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:41.741731    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:42.241681    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:42.740916    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:43.241125    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:43.741753    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:44.240915    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:44.741035    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:45.242356    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:45.748626    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:46.241544    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:46.782986    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:47.241703    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:47.740328    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:48.241167    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:48.745499    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:49.241481    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:49.741793    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:50.240754    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:50.741703    8287 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:51.285892    8287 kapi.go:107] duration metric: took 1m11.049442349s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0906 18:32:08.719394    8287 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0906 18:32:08.719433    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:09.180343    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:09.680597    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:10.180673    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:10.680659    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:11.180151    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:11.679729    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:12.179984    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:12.680429    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:13.180431    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:13.680037    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:14.180974    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:14.680215    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:15.180289    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:15.680745    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:16.180682    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:16.680859    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:17.180029    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:17.680674    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:18.180827    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:18.679989    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:19.180569    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:19.680303    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:20.180240    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:20.680122    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:21.180079    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:21.679986    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:22.179714    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:22.680125    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:23.180608    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:23.681002    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:24.182134    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:24.681472    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:25.180231    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:25.680141    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:26.179452    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:26.679365    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:27.180868    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:27.680495    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:28.180313    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:28.680460    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:29.180111    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:29.680256    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:30.180661    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:30.680247    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:31.180257    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:31.679562    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:32.181106    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:32.679837    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:33.179714    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:33.680151    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:34.180017    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:34.680118    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:35.180724    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:35.680499    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:36.180094    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:36.679692    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:37.180065    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:37.681363    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:38.180665    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:38.680431    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:39.180071    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:39.680531    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:40.180685    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:40.680908    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:41.180713    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:41.679906    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:42.180108    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:42.680507    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:43.180957    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:43.680520    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:44.180702    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:44.679792    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:45.180904    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:45.680585    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:46.180158    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:46.680700    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:47.180362    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:47.679952    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:48.180743    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:48.686930    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:49.181373    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:49.680878    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:50.179673    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:50.679886    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:51.181349    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:51.680263    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:52.180753    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:52.680992    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:53.180345    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:53.680401    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:54.179868    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:54.680243    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:55.180880    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:55.680930    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:56.179897    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:56.680172    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:57.179739    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:57.680756    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:58.180505    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:58.680017    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:59.179520    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:32:59.680754    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:33:00.216726    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:33:00.680296    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:33:01.181022    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:33:01.679764    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:33:02.180421    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:33:02.679692    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:33:03.180451    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:33:03.679476    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:33:04.180699    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:33:04.679552    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:33:05.180337    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:33:05.679753    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:33:06.179698    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:33:06.680580    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:33:07.179813    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:33:07.680808    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:33:08.180973    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:33:08.680478    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:33:09.179391    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:33:09.679615    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:33:10.180688    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:33:10.681191    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:33:11.180238    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:33:11.679708    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:33:12.180753    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:33:12.680299    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:33:13.181082    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:33:13.679791    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:33:14.179435    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:33:14.680187    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:33:15.181064    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:33:15.680611    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:33:16.180111    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:33:16.681810    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:33:17.180177    8287 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0906 18:33:17.680031    8287 kapi.go:107] duration metric: took 2m32.003620071s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0906 18:33:17.682954    8287 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-724441 cluster.
	I0906 18:33:17.685427    8287 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0906 18:33:17.688103    8287 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0906 18:33:17.690998    8287 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner-rancher, volcano, storage-provisioner, ingress-dns, nvidia-device-plugin, metrics-server, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0906 18:33:17.693583    8287 addons.go:510] duration metric: took 2m47.974869589s for enable addons: enabled=[cloud-spanner storage-provisioner-rancher volcano storage-provisioner ingress-dns nvidia-device-plugin metrics-server inspektor-gadget yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0906 18:33:17.693624    8287 start.go:246] waiting for cluster config update ...
	I0906 18:33:17.693655    8287 start.go:255] writing updated cluster config ...
	I0906 18:33:17.693949    8287 ssh_runner.go:195] Run: rm -f paused
	I0906 18:33:18.093908    8287 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0906 18:33:18.096683    8287 out.go:177] * Done! kubectl is now configured to use "addons-724441" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 06 18:42:55 addons-724441 dockerd[1282]: time="2024-09-06T18:42:55.176126987Z" level=info msg="ignoring event" container=dc2215861affb41b6194dc5bfc0fff245876c4f3a51cc6fc1e15c3ea55b018c8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 18:42:55 addons-724441 dockerd[1282]: time="2024-09-06T18:42:55.183708558Z" level=info msg="ignoring event" container=62219527bd4c1dfe762fbc61280a3048665b1bfe94867dffa12781cc33bea0eb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 18:42:55 addons-724441 dockerd[1282]: time="2024-09-06T18:42:55.366944592Z" level=info msg="ignoring event" container=6cdbcb7a2223ca6ce399fc27856ba960d7390e256877bc0c5e4b637ed61b2e32 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 18:42:55 addons-724441 dockerd[1282]: time="2024-09-06T18:42:55.413844832Z" level=info msg="ignoring event" container=89212b71c749fe93d2a657e9fbb8a37a9a21604f2871f9b99776e5eb3168c30e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 18:42:56 addons-724441 dockerd[1282]: time="2024-09-06T18:42:56.997046384Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 06 18:42:57 addons-724441 dockerd[1282]: time="2024-09-06T18:42:56.999959202Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 06 18:43:01 addons-724441 dockerd[1282]: time="2024-09-06T18:43:01.890541557Z" level=info msg="ignoring event" container=61c43886af7543cab8e99b8ddd5abc034488095ab7c0386bf7e1d1a81b028151 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 18:43:02 addons-724441 dockerd[1282]: time="2024-09-06T18:43:02.064161198Z" level=info msg="ignoring event" container=f41e98b1874146f7f1a161b7e74f0f950391844cbddc205e6b4fc49a799ab0ef module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 18:43:02 addons-724441 cri-dockerd[1539]: time="2024-09-06T18:43:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/19dd75bd034987281ac1bbbbc2dbd4c45ab0456d01ecada67bbfe2d83d3686b1/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Sep 06 18:43:02 addons-724441 dockerd[1282]: time="2024-09-06T18:43:02.985306899Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 06 18:43:03 addons-724441 cri-dockerd[1539]: time="2024-09-06T18:43:03Z" level=info msg="Stop pulling image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: Status: Downloaded newer image for busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 06 18:43:03 addons-724441 dockerd[1282]: time="2024-09-06T18:43:03.706482612Z" level=info msg="ignoring event" container=d8ecef5216bb50c4167f7196a99b107c591c8d7ca90e2017ce5d8671c1c119c6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 18:43:05 addons-724441 dockerd[1282]: time="2024-09-06T18:43:05.890234924Z" level=info msg="ignoring event" container=19dd75bd034987281ac1bbbbc2dbd4c45ab0456d01ecada67bbfe2d83d3686b1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 18:43:07 addons-724441 cri-dockerd[1539]: time="2024-09-06T18:43:07Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3bb23b3ee099408f982998b23424782d8ebcb818cda4583aa400d926721cef03/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Sep 06 18:43:08 addons-724441 cri-dockerd[1539]: time="2024-09-06T18:43:08Z" level=info msg="Stop pulling image busybox:stable: Status: Downloaded newer image for busybox:stable"
	Sep 06 18:43:08 addons-724441 dockerd[1282]: time="2024-09-06T18:43:08.887242491Z" level=info msg="ignoring event" container=53e3118fe9bddc4e00ae2589a7311f6957bd2ebcca2219044b2aa4f1c27f9703 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 18:43:11 addons-724441 dockerd[1282]: time="2024-09-06T18:43:11.063568654Z" level=info msg="ignoring event" container=3bb23b3ee099408f982998b23424782d8ebcb818cda4583aa400d926721cef03 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 18:43:12 addons-724441 cri-dockerd[1539]: time="2024-09-06T18:43:12Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4e0bfaa82e37792f4607b171898e88f0fcccd5e826c4e6a0199d6e8b3b0bc730/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Sep 06 18:43:12 addons-724441 dockerd[1282]: time="2024-09-06T18:43:12.749082145Z" level=info msg="ignoring event" container=e882606893c2d3e9a54205cd97c7fde34c14e840e8a59ba8825d98fb8d87b9b1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 18:43:14 addons-724441 dockerd[1282]: time="2024-09-06T18:43:14.120145478Z" level=info msg="ignoring event" container=4e0bfaa82e37792f4607b171898e88f0fcccd5e826c4e6a0199d6e8b3b0bc730 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 18:43:15 addons-724441 dockerd[1282]: time="2024-09-06T18:43:15.389988790Z" level=info msg="ignoring event" container=0c86386fc1ecefc512379297425c8a50f31292f606db3c9e025b7c43a1e502c0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 18:43:16 addons-724441 dockerd[1282]: time="2024-09-06T18:43:16.046087958Z" level=info msg="ignoring event" container=738ce26fe08126f3b938e4bed6d22e1bb157af79cff9e7ade8c6a56ba4c5b505 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 18:43:16 addons-724441 dockerd[1282]: time="2024-09-06T18:43:16.175974151Z" level=info msg="ignoring event" container=8077219f4e2c34fcb2fc2856a4aaff03297c8cdbc6afe9745da5716b7d22466c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 18:43:16 addons-724441 dockerd[1282]: time="2024-09-06T18:43:16.317316057Z" level=info msg="ignoring event" container=54af438dbdc053fc98b22df4f0174c90f36cca27c4990aa43f6fd0e96337c2b1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 06 18:43:16 addons-724441 dockerd[1282]: time="2024-09-06T18:43:16.432750592Z" level=info msg="ignoring event" container=2627836df557ff0efe1a3e794069165264d151d188ba86e3dd11ecaa22c5f37c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e882606893c2d       fc9db2894f4e4                                                                                                                5 seconds ago       Exited              helper-pod                0                   4e0bfaa82e377       helper-pod-delete-pvc-7aca5f61-b674-43cb-8d89-d088d3ea181f
	53e3118fe9bdd       busybox@sha256:34b191d63fbc93e25e275bfccf1b5365664e5ac28f06d974e8d50090fbb49f41                                              9 seconds ago       Exited              busybox                   0                   3bb23b3ee0994       test-local-path
	7003f6536a23b       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec            43 seconds ago      Exited              gadget                    7                   3bec634911a4a       gadget-6dprc
	52c15cb5f4b38       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 10 minutes ago      Running             gcp-auth                  0                   cb7081beee8ae       gcp-auth-89d5ffd79-g8gdc
	651d73d782077       registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce             11 minutes ago      Running             controller                0                   a5de5f89ab681       ingress-nginx-controller-bc57996ff-dk6m7
	1e1a25355d42f       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              patch                     0                   ffff393dcc701       ingress-nginx-admission-patch-rhw4f
	4d684bd5d36a7       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              create                    0                   e0d3c61fe84e9       ingress-nginx-admission-create-rcj22
	32ad318078558       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                       12 minutes ago      Running             local-path-provisioner    0                   4eb453096135c       local-path-provisioner-86d989889c-6hmf9
	6be1d8489d474       registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9        12 minutes ago      Running             metrics-server            0                   37f99a050998d       metrics-server-84c5f94fbc-kfplw
	9533863ce8022       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c             12 minutes ago      Running             minikube-ingress-dns      0                   c4072fdec454c       kube-ingress-dns-minikube
	3b271dd34504c       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc               12 minutes ago      Running             cloud-spanner-emulator    0                   6592b3344160d       cloud-spanner-emulator-769b77f747-gq6gh
	72749a32d239f       ba04bb24b9575                                                                                                                12 minutes ago      Running             storage-provisioner       0                   ce3a4b9e2ffbc       storage-provisioner
	398a0debb2bc7       2437cf7621777                                                                                                                12 minutes ago      Running             coredns                   0                   3b9a34fecba70       coredns-6f6b679f8f-bcw8r
	cd4ab6f6d1fd7       71d55d66fd4ee                                                                                                                12 minutes ago      Running             kube-proxy                0                   4aad8202a74a6       kube-proxy-6qfvk
	1cc004b59a28e       fcb0683e6bdbd                                                                                                                13 minutes ago      Running             kube-controller-manager   0                   1d51754fdcaf9       kube-controller-manager-addons-724441
	c9c0b0baf9d0e       27e3830e14027                                                                                                                13 minutes ago      Running             etcd                      0                   fd819376d2e81       etcd-addons-724441
	eb551564c01b4       cd0f0ae0ec9e0                                                                                                                13 minutes ago      Running             kube-apiserver            0                   e25a077b562a2       kube-apiserver-addons-724441
	7b697ae4e1620       fbbbd428abb4d                                                                                                                13 minutes ago      Running             kube-scheduler            0                   b05e21cc3a74d       kube-scheduler-addons-724441
	
	
	==> controller_ingress [651d73d78207] <==
	NGINX Ingress controller
	  Release:       v1.11.2
	  Build:         46e76e5916813cfca2a9b0bfdc34b69a0000f6b9
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.25.5
	
	-------------------------------------------------------------------------------
	
	I0906 18:31:50.933823       7 main.go:248] "Running in Kubernetes cluster" major="1" minor="31" git="v1.31.0" state="clean" commit="9edcffcde5595e8a5b1a35f88c421764e575afce" platform="linux/arm64"
	I0906 18:31:51.105665       7 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0906 18:31:51.131783       7 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0906 18:31:51.150644       7 nginx.go:271] "Starting NGINX Ingress controller"
	I0906 18:31:51.162443       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"a28837df-2df5-4836-b926-1e1cc0eed705", APIVersion:"v1", ResourceVersion:"661", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0906 18:31:51.175354       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"cc60df73-bafb-4906-a46b-2d8ed3d3278c", APIVersion:"v1", ResourceVersion:"663", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0906 18:31:51.175837       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"a8679235-4357-4b69-9cb3-0c684a328e77", APIVersion:"v1", ResourceVersion:"664", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0906 18:31:52.352081       7 nginx.go:317] "Starting NGINX process"
	I0906 18:31:52.353086       7 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0906 18:31:52.359451       7 nginx.go:337] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0906 18:31:52.360759       7 controller.go:193] "Configuration changes detected, backend reload required"
	I0906 18:31:52.373132       7 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0906 18:31:52.373710       7 status.go:85] "New leader elected" identity="ingress-nginx-controller-bc57996ff-dk6m7"
	I0906 18:31:52.385146       7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-bc57996ff-dk6m7" node="addons-724441"
	I0906 18:31:52.423446       7 controller.go:213] "Backend successfully reloaded"
	I0906 18:31:52.423625       7 controller.go:224] "Initial sync, sleeping for 1 second"
	I0906 18:31:52.423788       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-dk6m7", UID:"c02b9b99-f068-422a-b41f-8cd0267f8fa2", APIVersion:"v1", ResourceVersion:"687", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	
	
	==> coredns [398a0debb2bc] <==
	[INFO] 10.244.0.8:44454 - 18659 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000072163s
	[INFO] 10.244.0.8:39538 - 15089 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002669993s
	[INFO] 10.244.0.8:39538 - 31219 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.003035898s
	[INFO] 10.244.0.8:58172 - 8127 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000121509s
	[INFO] 10.244.0.8:58172 - 32184 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000097459s
	[INFO] 10.244.0.8:49296 - 28340 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000144738s
	[INFO] 10.244.0.8:49296 - 49072 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000054506s
	[INFO] 10.244.0.8:43564 - 19232 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00007099s
	[INFO] 10.244.0.8:43564 - 33582 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000053341s
	[INFO] 10.244.0.8:36886 - 40495 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000065928s
	[INFO] 10.244.0.8:36886 - 25389 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000050363s
	[INFO] 10.244.0.8:56111 - 53094 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001775748s
	[INFO] 10.244.0.8:56111 - 63844 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001776338s
	[INFO] 10.244.0.8:53413 - 13995 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000124553s
	[INFO] 10.244.0.8:53413 - 17828 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000116758s
	[INFO] 10.244.0.25:52551 - 28457 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000240746s
	[INFO] 10.244.0.25:59265 - 61399 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000137607s
	[INFO] 10.244.0.25:37677 - 41128 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000120041s
	[INFO] 10.244.0.25:48302 - 8014 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000122362s
	[INFO] 10.244.0.25:34443 - 10758 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000088533s
	[INFO] 10.244.0.25:37860 - 36055 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000120738s
	[INFO] 10.244.0.25:41910 - 44552 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.00252014s
	[INFO] 10.244.0.25:39285 - 58085 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002253769s
	[INFO] 10.244.0.25:54087 - 19465 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001921117s
	[INFO] 10.244.0.25:55063 - 36231 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001731719s
	
	
	==> describe nodes <==
	Name:               addons-724441
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-724441
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13
	                    minikube.k8s.io/name=addons-724441
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_06T18_30_25_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-724441
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 06 Sep 2024 18:30:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-724441
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 06 Sep 2024 18:43:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 06 Sep 2024 18:39:04 +0000   Fri, 06 Sep 2024 18:30:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 06 Sep 2024 18:39:04 +0000   Fri, 06 Sep 2024 18:30:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 06 Sep 2024 18:39:04 +0000   Fri, 06 Sep 2024 18:30:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 06 Sep 2024 18:39:04 +0000   Fri, 06 Sep 2024 18:30:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-724441
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 0278d7292e38433c8dcae8463b30be2b
	  System UUID:                d9db8731-6658-4be1-862e-abcfc5d174db
	  Boot ID:                    e4f6c2d6-2311-45f5-b0f2-7344dd80c644
	  Kernel Version:             5.15.0-1068-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.2.0
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m17s
	  default                     cloud-spanner-emulator-769b77f747-gq6gh     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  gadget                      gadget-6dprc                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  gcp-auth                    gcp-auth-89d5ffd79-g8gdc                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-dk6m7    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         12m
	  kube-system                 coredns-6f6b679f8f-bcw8r                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-addons-724441                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-724441                250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-724441       200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-6qfvk                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-724441                100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 metrics-server-84c5f94fbc-kfplw             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         12m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          local-path-provisioner-86d989889c-6hmf9     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  0 (0%)
	  memory             460Mi (5%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 12m   kube-proxy       
	  Normal   Starting                 12m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  12m   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m   kubelet          Node addons-724441 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m   kubelet          Node addons-724441 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m   kubelet          Node addons-724441 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m   node-controller  Node addons-724441 event: Registered Node addons-724441 in Controller
	
	
	==> dmesg <==
	[Sep 6 18:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015612] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.450027] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.723664] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.563718] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> etcd [c9c0b0baf9d0] <==
	{"level":"info","ts":"2024-09-06T18:30:18.059955Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-06T18:30:18.060000Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-06T18:30:18.301420Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-06T18:30:18.301632Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-06T18:30:18.301799Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-09-06T18:30:18.301904Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-09-06T18:30:18.302050Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-06T18:30:18.302158Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-06T18:30:18.302284Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-06T18:30:18.304732Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-06T18:30:18.309600Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-724441 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-06T18:30:18.309775Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-06T18:30:18.310282Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-06T18:30:18.310494Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-06T18:30:18.310631Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-06T18:30:18.310723Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-06T18:30:18.311566Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-06T18:30:18.319663Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-06T18:30:18.320731Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-06T18:30:18.321777Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-06T18:30:18.329447Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-06T18:30:18.329647Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-06T18:40:19.732162Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1845}
	{"level":"info","ts":"2024-09-06T18:40:19.783819Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1845,"took":"51.119702ms","hash":820712249,"current-db-size-bytes":8814592,"current-db-size":"8.8 MB","current-db-size-in-use-bytes":4829184,"current-db-size-in-use":"4.8 MB"}
	{"level":"info","ts":"2024-09-06T18:40:19.783873Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":820712249,"revision":1845,"compact-revision":-1}
	
	
	==> gcp-auth [52c15cb5f4b3] <==
	2024/09/06 18:33:16 GCP Auth Webhook started!
	2024/09/06 18:33:35 Ready to marshal response ...
	2024/09/06 18:33:35 Ready to write response ...
	2024/09/06 18:33:36 Ready to marshal response ...
	2024/09/06 18:33:36 Ready to write response ...
	2024/09/06 18:34:00 Ready to marshal response ...
	2024/09/06 18:34:00 Ready to write response ...
	2024/09/06 18:34:00 Ready to marshal response ...
	2024/09/06 18:34:00 Ready to write response ...
	2024/09/06 18:34:00 Ready to marshal response ...
	2024/09/06 18:34:00 Ready to write response ...
	2024/09/06 18:42:15 Ready to marshal response ...
	2024/09/06 18:42:15 Ready to write response ...
	2024/09/06 18:42:17 Ready to marshal response ...
	2024/09/06 18:42:17 Ready to write response ...
	2024/09/06 18:42:38 Ready to marshal response ...
	2024/09/06 18:42:38 Ready to write response ...
	2024/09/06 18:43:02 Ready to marshal response ...
	2024/09/06 18:43:02 Ready to write response ...
	2024/09/06 18:43:02 Ready to marshal response ...
	2024/09/06 18:43:02 Ready to write response ...
	2024/09/06 18:43:11 Ready to marshal response ...
	2024/09/06 18:43:11 Ready to write response ...
	
	
	==> kernel <==
	 18:43:17 up 25 min,  0 users,  load average: 1.58, 0.89, 0.73
	Linux addons-724441 5.15.0-1068-aws #74~20.04.1-Ubuntu SMP Tue Aug 6 19:45:17 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kube-apiserver [eb551564c01b] <==
	I0906 18:33:51.392872       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0906 18:33:51.424094       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0906 18:33:51.489996       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0906 18:33:51.671927       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	W0906 18:33:51.995525       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0906 18:33:52.168548       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0906 18:33:52.192479       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0906 18:33:52.254285       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0906 18:33:52.490730       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0906 18:33:52.774314       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0906 18:42:25.242327       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0906 18:42:54.850216       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0906 18:42:54.850272       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0906 18:42:54.888750       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0906 18:42:54.888882       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0906 18:42:54.948911       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0906 18:42:54.948976       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0906 18:42:55.018074       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0906 18:42:55.018138       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0906 18:42:55.893564       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0906 18:42:56.027495       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0906 18:42:56.132806       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E0906 18:43:12.931446       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0906 18:43:12.942969       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0906 18:43:12.954012       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	
	
	==> kube-controller-manager [1cc004b59a28] <==
	I0906 18:42:59.615635       1 shared_informer.go:320] Caches are synced for resource quota
	I0906 18:42:59.744190       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0906 18:42:59.744232       1 shared_informer.go:320] Caches are synced for garbage collector
	W0906 18:42:59.845164       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 18:42:59.845207       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0906 18:43:00.328850       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 18:43:00.328900       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0906 18:43:04.137208       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 18:43:04.137301       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0906 18:43:04.868252       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 18:43:04.868296       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0906 18:43:06.115911       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 18:43:06.115952       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0906 18:43:06.339303       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 18:43:06.339380       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0906 18:43:11.257851       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 18:43:11.257894       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0906 18:43:12.571995       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-86d989889c" duration="20.356µs"
	W0906 18:43:13.468934       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 18:43:13.468975       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0906 18:43:15.960195       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-6fb4cdfc84" duration="4.456µs"
	W0906 18:43:16.870049       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 18:43:16.870101       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0906 18:43:16.930957       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0906 18:43:16.930999       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [cd4ab6f6d1fd] <==
	I0906 18:30:31.527222       1 server_linux.go:66] "Using iptables proxy"
	I0906 18:30:31.597831       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0906 18:30:31.597919       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0906 18:30:31.693598       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0906 18:30:31.693659       1 server_linux.go:169] "Using iptables Proxier"
	I0906 18:30:31.696359       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0906 18:30:31.696666       1 server.go:483] "Version info" version="v1.31.0"
	I0906 18:30:31.696681       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 18:30:31.698078       1 config.go:197] "Starting service config controller"
	I0906 18:30:31.698106       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0906 18:30:31.698127       1 config.go:104] "Starting endpoint slice config controller"
	I0906 18:30:31.698132       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0906 18:30:31.698857       1 config.go:326] "Starting node config controller"
	I0906 18:30:31.698867       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0906 18:30:31.798184       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0906 18:30:31.798233       1 shared_informer.go:320] Caches are synced for service config
	I0906 18:30:31.799489       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [7b697ae4e162] <==
	W0906 18:30:22.412725       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0906 18:30:22.412747       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0906 18:30:22.412824       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0906 18:30:22.412842       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0906 18:30:22.412938       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0906 18:30:22.412953       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0906 18:30:22.412993       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0906 18:30:22.413011       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 18:30:22.413048       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0906 18:30:22.413067       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 18:30:22.413104       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0906 18:30:22.413120       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 18:30:22.413160       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0906 18:30:22.413175       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 18:30:22.413281       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0906 18:30:22.413300       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0906 18:30:22.413345       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0906 18:30:22.413365       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 18:30:22.414316       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0906 18:30:22.414356       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0906 18:30:22.414490       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0906 18:30:22.414510       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0906 18:30:22.414570       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0906 18:30:22.414590       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0906 18:30:23.905027       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 06 18:43:14 addons-724441 kubelet[2318]: I0906 18:43:14.294694    2318 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/6e795c49-bee5-499a-abda-93e975b1a4a6-gcp-creds\") on node \"addons-724441\" DevicePath \"\""
	Sep 06 18:43:14 addons-724441 kubelet[2318]: I0906 18:43:14.719339    2318 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6fb4cdfc84-f4qv7" secret="" err="secret \"gcp-auth\" not found"
	Sep 06 18:43:15 addons-724441 kubelet[2318]: I0906 18:43:15.068385    2318 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e0bfaa82e37792f4607b171898e88f0fcccd5e826c4e6a0199d6e8b3b0bc730"
	Sep 06 18:43:15 addons-724441 kubelet[2318]: I0906 18:43:15.503130    2318 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/6d62302a-66a0-4c70-a3ac-9564bf734cb6-gcp-creds\") pod \"6d62302a-66a0-4c70-a3ac-9564bf734cb6\" (UID: \"6d62302a-66a0-4c70-a3ac-9564bf734cb6\") "
	Sep 06 18:43:15 addons-724441 kubelet[2318]: I0906 18:43:15.503205    2318 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wf5cd\" (UniqueName: \"kubernetes.io/projected/6d62302a-66a0-4c70-a3ac-9564bf734cb6-kube-api-access-wf5cd\") pod \"6d62302a-66a0-4c70-a3ac-9564bf734cb6\" (UID: \"6d62302a-66a0-4c70-a3ac-9564bf734cb6\") "
	Sep 06 18:43:15 addons-724441 kubelet[2318]: I0906 18:43:15.503610    2318 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d62302a-66a0-4c70-a3ac-9564bf734cb6-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "6d62302a-66a0-4c70-a3ac-9564bf734cb6" (UID: "6d62302a-66a0-4c70-a3ac-9564bf734cb6"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 06 18:43:15 addons-724441 kubelet[2318]: I0906 18:43:15.505482    2318 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d62302a-66a0-4c70-a3ac-9564bf734cb6-kube-api-access-wf5cd" (OuterVolumeSpecName: "kube-api-access-wf5cd") pod "6d62302a-66a0-4c70-a3ac-9564bf734cb6" (UID: "6d62302a-66a0-4c70-a3ac-9564bf734cb6"). InnerVolumeSpecName "kube-api-access-wf5cd". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 06 18:43:15 addons-724441 kubelet[2318]: I0906 18:43:15.603599    2318 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/6d62302a-66a0-4c70-a3ac-9564bf734cb6-gcp-creds\") on node \"addons-724441\" DevicePath \"\""
	Sep 06 18:43:15 addons-724441 kubelet[2318]: I0906 18:43:15.603641    2318 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-wf5cd\" (UniqueName: \"kubernetes.io/projected/6d62302a-66a0-4c70-a3ac-9564bf734cb6-kube-api-access-wf5cd\") on node \"addons-724441\" DevicePath \"\""
	Sep 06 18:43:16 addons-724441 kubelet[2318]: I0906 18:43:16.520309    2318 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngtvk\" (UniqueName: \"kubernetes.io/projected/4c34f666-b1de-4d3c-8f16-830242c1fba7-kube-api-access-ngtvk\") pod \"4c34f666-b1de-4d3c-8f16-830242c1fba7\" (UID: \"4c34f666-b1de-4d3c-8f16-830242c1fba7\") "
	Sep 06 18:43:16 addons-724441 kubelet[2318]: I0906 18:43:16.522720    2318 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c34f666-b1de-4d3c-8f16-830242c1fba7-kube-api-access-ngtvk" (OuterVolumeSpecName: "kube-api-access-ngtvk") pod "4c34f666-b1de-4d3c-8f16-830242c1fba7" (UID: "4c34f666-b1de-4d3c-8f16-830242c1fba7"). InnerVolumeSpecName "kube-api-access-ngtvk". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 06 18:43:16 addons-724441 kubelet[2318]: I0906 18:43:16.621963    2318 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ccqgx\" (UniqueName: \"kubernetes.io/projected/5fe8e711-512c-42ae-88ce-cb1b93021495-kube-api-access-ccqgx\") pod \"5fe8e711-512c-42ae-88ce-cb1b93021495\" (UID: \"5fe8e711-512c-42ae-88ce-cb1b93021495\") "
	Sep 06 18:43:16 addons-724441 kubelet[2318]: I0906 18:43:16.622052    2318 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-ngtvk\" (UniqueName: \"kubernetes.io/projected/4c34f666-b1de-4d3c-8f16-830242c1fba7-kube-api-access-ngtvk\") on node \"addons-724441\" DevicePath \"\""
	Sep 06 18:43:16 addons-724441 kubelet[2318]: I0906 18:43:16.624241    2318 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe8e711-512c-42ae-88ce-cb1b93021495-kube-api-access-ccqgx" (OuterVolumeSpecName: "kube-api-access-ccqgx") pod "5fe8e711-512c-42ae-88ce-cb1b93021495" (UID: "5fe8e711-512c-42ae-88ce-cb1b93021495"). InnerVolumeSpecName "kube-api-access-ccqgx". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 06 18:43:16 addons-724441 kubelet[2318]: I0906 18:43:16.723306    2318 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-ccqgx\" (UniqueName: \"kubernetes.io/projected/5fe8e711-512c-42ae-88ce-cb1b93021495-kube-api-access-ccqgx\") on node \"addons-724441\" DevicePath \"\""
	Sep 06 18:43:16 addons-724441 kubelet[2318]: E0906 18:43:16.726275    2318 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="41920f3b-c32b-41b2-8377-de8a173e7782"
	Sep 06 18:43:16 addons-724441 kubelet[2318]: I0906 18:43:16.736000    2318 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d62302a-66a0-4c70-a3ac-9564bf734cb6" path="/var/lib/kubelet/pods/6d62302a-66a0-4c70-a3ac-9564bf734cb6/volumes"
	Sep 06 18:43:17 addons-724441 kubelet[2318]: I0906 18:43:17.119362    2318 scope.go:117] "RemoveContainer" containerID="8077219f4e2c34fcb2fc2856a4aaff03297c8cdbc6afe9745da5716b7d22466c"
	Sep 06 18:43:17 addons-724441 kubelet[2318]: I0906 18:43:17.194490    2318 scope.go:117] "RemoveContainer" containerID="8077219f4e2c34fcb2fc2856a4aaff03297c8cdbc6afe9745da5716b7d22466c"
	Sep 06 18:43:17 addons-724441 kubelet[2318]: E0906 18:43:17.195680    2318 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 8077219f4e2c34fcb2fc2856a4aaff03297c8cdbc6afe9745da5716b7d22466c" containerID="8077219f4e2c34fcb2fc2856a4aaff03297c8cdbc6afe9745da5716b7d22466c"
	Sep 06 18:43:17 addons-724441 kubelet[2318]: I0906 18:43:17.195763    2318 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"8077219f4e2c34fcb2fc2856a4aaff03297c8cdbc6afe9745da5716b7d22466c"} err="failed to get container status \"8077219f4e2c34fcb2fc2856a4aaff03297c8cdbc6afe9745da5716b7d22466c\": rpc error: code = Unknown desc = Error response from daemon: No such container: 8077219f4e2c34fcb2fc2856a4aaff03297c8cdbc6afe9745da5716b7d22466c"
	Sep 06 18:43:17 addons-724441 kubelet[2318]: I0906 18:43:17.195982    2318 scope.go:117] "RemoveContainer" containerID="738ce26fe08126f3b938e4bed6d22e1bb157af79cff9e7ade8c6a56ba4c5b505"
	Sep 06 18:43:17 addons-724441 kubelet[2318]: I0906 18:43:17.219733    2318 scope.go:117] "RemoveContainer" containerID="738ce26fe08126f3b938e4bed6d22e1bb157af79cff9e7ade8c6a56ba4c5b505"
	Sep 06 18:43:17 addons-724441 kubelet[2318]: E0906 18:43:17.221108    2318 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 738ce26fe08126f3b938e4bed6d22e1bb157af79cff9e7ade8c6a56ba4c5b505" containerID="738ce26fe08126f3b938e4bed6d22e1bb157af79cff9e7ade8c6a56ba4c5b505"
	Sep 06 18:43:17 addons-724441 kubelet[2318]: I0906 18:43:17.221163    2318 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"738ce26fe08126f3b938e4bed6d22e1bb157af79cff9e7ade8c6a56ba4c5b505"} err="failed to get container status \"738ce26fe08126f3b938e4bed6d22e1bb157af79cff9e7ade8c6a56ba4c5b505\": rpc error: code = Unknown desc = Error response from daemon: No such container: 738ce26fe08126f3b938e4bed6d22e1bb157af79cff9e7ade8c6a56ba4c5b505"
	
	
	==> storage-provisioner [72749a32d239] <==
	I0906 18:30:37.746871       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0906 18:30:37.769441       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0906 18:30:37.769502       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0906 18:30:37.777095       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0906 18:30:37.778153       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-724441_00e312fb-6498-497a-bfe0-e8a03d83d18a!
	I0906 18:30:37.779906       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4c80d298-ff0e-4a35-b4b6-5fccde44ec10", APIVersion:"v1", ResourceVersion:"560", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-724441_00e312fb-6498-497a-bfe0-e8a03d83d18a became leader
	I0906 18:30:37.879660       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-724441_00e312fb-6498-497a-bfe0-e8a03d83d18a!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-724441 -n addons-724441
helpers_test.go:261: (dbg) Run:  kubectl --context addons-724441 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-rcj22 ingress-nginx-admission-patch-rhw4f
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-724441 describe pod busybox ingress-nginx-admission-create-rcj22 ingress-nginx-admission-patch-rhw4f
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-724441 describe pod busybox ingress-nginx-admission-create-rcj22 ingress-nginx-admission-patch-rhw4f: exit status 1 (97.358602ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-724441/192.168.49.2
	Start Time:       Fri, 06 Sep 2024 18:34:00 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mdhgf (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-mdhgf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m18s                   default-scheduler  Successfully assigned default/busybox to addons-724441
	  Warning  Failed     7m55s (x6 over 9m16s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    7m40s (x4 over 9m17s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m40s (x4 over 9m17s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m40s (x4 over 9m17s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m12s (x21 over 9m16s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-rcj22" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-rhw4f" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-724441 describe pod busybox ingress-nginx-admission-create-rcj22 ingress-nginx-admission-patch-rhw4f: exit status 1
--- FAIL: TestAddons/parallel/Registry (74.56s)

TestFunctional/parallel/License (0.23s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
functional_test.go:2288: (dbg) Non-zero exit: out/minikube-linux-arm64 license: exit status 40 (227.938515ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to INET_LICENSES: Failed to download licenses: download request did not return a 200, received: 404
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_license_42713f820c0ac68901ecf7b12bfdf24c2cafe65d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2289: command "\n\n" failed: exit status 40
--- FAIL: TestFunctional/parallel/License (0.23s)


Test pass (317/343)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 7.41
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.21
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.0/json-events 5.57
13 TestDownloadOnly/v1.31.0/preload-exists 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.06
18 TestDownloadOnly/v1.31.0/DeleteAll 0.22
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.55
22 TestOffline 90.62
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 221.38
29 TestAddons/serial/Volcano 42.29
31 TestAddons/serial/GCPAuth/Namespaces 0.18
34 TestAddons/parallel/Ingress 19.52
35 TestAddons/parallel/InspektorGadget 10.82
36 TestAddons/parallel/MetricsServer 5.74
39 TestAddons/parallel/CSI 40.67
40 TestAddons/parallel/Headlamp 16.62
41 TestAddons/parallel/CloudSpanner 5.47
42 TestAddons/parallel/LocalPath 53.43
43 TestAddons/parallel/NvidiaDevicePlugin 6.45
44 TestAddons/parallel/Yakd 10.69
45 TestAddons/StoppedEnableDisable 6.02
46 TestCertOptions 42.86
47 TestCertExpiration 246.56
48 TestDockerFlags 34.71
49 TestForceSystemdFlag 48.42
50 TestForceSystemdEnv 41.81
56 TestErrorSpam/setup 30.58
57 TestErrorSpam/start 0.85
58 TestErrorSpam/status 0.98
59 TestErrorSpam/pause 1.37
60 TestErrorSpam/unpause 1.47
61 TestErrorSpam/stop 2.21
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 75.55
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 27.76
68 TestFunctional/serial/KubeContext 0.06
69 TestFunctional/serial/KubectlGetPods 0.11
72 TestFunctional/serial/CacheCmd/cache/add_remote 3.35
73 TestFunctional/serial/CacheCmd/cache/add_local 0.98
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
75 TestFunctional/serial/CacheCmd/cache/list 0.05
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.34
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.56
78 TestFunctional/serial/CacheCmd/cache/delete 0.12
79 TestFunctional/serial/MinikubeKubectlCmd 0.14
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
81 TestFunctional/serial/ExtraConfig 43.67
82 TestFunctional/serial/ComponentHealth 0.1
83 TestFunctional/serial/LogsCmd 1.14
84 TestFunctional/serial/LogsFileCmd 1.18
85 TestFunctional/serial/InvalidService 5.01
87 TestFunctional/parallel/ConfigCmd 0.45
88 TestFunctional/parallel/DashboardCmd 12.26
89 TestFunctional/parallel/DryRun 0.42
90 TestFunctional/parallel/InternationalLanguage 0.2
91 TestFunctional/parallel/StatusCmd 1.1
95 TestFunctional/parallel/ServiceCmdConnect 12.61
96 TestFunctional/parallel/AddonsCmd 0.22
97 TestFunctional/parallel/PersistentVolumeClaim 28.77
99 TestFunctional/parallel/SSHCmd 0.83
100 TestFunctional/parallel/CpCmd 2.23
102 TestFunctional/parallel/FileSync 0.32
103 TestFunctional/parallel/CertSync 2.11
107 TestFunctional/parallel/NodeLabels 0.12
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.34
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.61
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.43
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.12
118 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
122 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
123 TestFunctional/parallel/ServiceCmd/DeployApp 7.25
124 TestFunctional/parallel/ProfileCmd/profile_not_create 0.42
125 TestFunctional/parallel/ProfileCmd/profile_list 0.42
126 TestFunctional/parallel/ServiceCmd/List 0.55
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.46
128 TestFunctional/parallel/ServiceCmd/JSONOutput 0.63
129 TestFunctional/parallel/MountCmd/any-port 8.04
130 TestFunctional/parallel/ServiceCmd/HTTPS 0.42
131 TestFunctional/parallel/ServiceCmd/Format 0.44
132 TestFunctional/parallel/ServiceCmd/URL 0.48
133 TestFunctional/parallel/MountCmd/specific-port 2.34
134 TestFunctional/parallel/MountCmd/VerifyCleanup 2.42
135 TestFunctional/parallel/Version/short 0.06
136 TestFunctional/parallel/Version/components 1.15
137 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
138 TestFunctional/parallel/ImageCommands/ImageListTable 0.21
139 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
140 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
141 TestFunctional/parallel/ImageCommands/ImageBuild 3.23
142 TestFunctional/parallel/ImageCommands/Setup 0.71
143 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.98
144 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.82
145 TestFunctional/parallel/UpdateContextCmd/no_changes 0.23
146 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.16
147 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.19
148 TestFunctional/parallel/DockerEnv/bash 1.27
149 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.27
150 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.38
151 TestFunctional/parallel/ImageCommands/ImageRemove 0.49
152 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.68
153 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.61
154 TestFunctional/delete_echo-server_images 0.09
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
160 TestMultiControlPlane/serial/StartCluster 125.35
161 TestMultiControlPlane/serial/DeployApp 45.75
162 TestMultiControlPlane/serial/PingHostFromPods 1.75
163 TestMultiControlPlane/serial/AddWorkerNode 25.89
164 TestMultiControlPlane/serial/NodeLabels 0.11
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.84
166 TestMultiControlPlane/serial/CopyFile 18.97
167 TestMultiControlPlane/serial/StopSecondaryNode 11.68
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.54
169 TestMultiControlPlane/serial/RestartSecondaryNode 70.25
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.77
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 204.71
172 TestMultiControlPlane/serial/DeleteSecondaryNode 11.34
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.56
174 TestMultiControlPlane/serial/StopCluster 33.06
175 TestMultiControlPlane/serial/RestartCluster 157.02
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.56
177 TestMultiControlPlane/serial/AddSecondaryNode 45.13
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.74
181 TestImageBuild/serial/Setup 34.8
182 TestImageBuild/serial/NormalBuild 1.98
183 TestImageBuild/serial/BuildWithBuildArg 1.03
184 TestImageBuild/serial/BuildWithDockerIgnore 0.98
185 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.83
189 TestJSONOutput/start/Command 44.63
190 TestJSONOutput/start/Audit 0
192 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/pause/Command 0.62
196 TestJSONOutput/pause/Audit 0
198 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/unpause/Command 0.5
202 TestJSONOutput/unpause/Audit 0
204 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
207 TestJSONOutput/stop/Command 5.99
208 TestJSONOutput/stop/Audit 0
210 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
211 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
212 TestErrorJSONOutput 0.21
214 TestKicCustomNetwork/create_custom_network 32.77
215 TestKicCustomNetwork/use_default_bridge_network 33.6
216 TestKicExistingNetwork 30.64
217 TestKicCustomSubnet 33.37
218 TestKicStaticIP 34.48
219 TestMainNoArgs 0.05
220 TestMinikubeProfile 66.35
223 TestMountStart/serial/StartWithMountFirst 8.56
224 TestMountStart/serial/VerifyMountFirst 0.25
225 TestMountStart/serial/StartWithMountSecond 7.68
226 TestMountStart/serial/VerifyMountSecond 0.24
227 TestMountStart/serial/DeleteFirst 1.49
228 TestMountStart/serial/VerifyMountPostDelete 0.25
229 TestMountStart/serial/Stop 1.21
230 TestMountStart/serial/RestartStopped 8.89
231 TestMountStart/serial/VerifyMountPostStop 0.25
234 TestMultiNode/serial/FreshStart2Nodes 84.21
235 TestMultiNode/serial/DeployApp2Nodes 54.48
236 TestMultiNode/serial/PingHostFrom2Pods 1.04
237 TestMultiNode/serial/AddNode 18.54
238 TestMultiNode/serial/MultiNodeLabels 0.15
239 TestMultiNode/serial/ProfileList 0.47
240 TestMultiNode/serial/CopyFile 9.91
241 TestMultiNode/serial/StopNode 2.31
242 TestMultiNode/serial/StartAfterStop 11.09
243 TestMultiNode/serial/RestartKeepsNodes 104.9
244 TestMultiNode/serial/DeleteNode 5.57
245 TestMultiNode/serial/StopMultiNode 21.71
246 TestMultiNode/serial/RestartMultiNode 58.95
247 TestMultiNode/serial/ValidateNameConflict 35.62
252 TestPreload 144.92
254 TestScheduledStopUnix 105.24
255 TestSkaffold 118.71
257 TestInsufficientStorage 11.42
258 TestRunningBinaryUpgrade 79.46
260 TestKubernetesUpgrade 377.13
261 TestMissingContainerUpgrade 157.91
263 TestPause/serial/Start 82.4
264 TestPause/serial/SecondStartNoReconfiguration 39.2
265 TestPause/serial/Pause 0.81
266 TestPause/serial/VerifyStatus 0.45
267 TestPause/serial/Unpause 0.68
268 TestPause/serial/PauseAgain 0.87
269 TestPause/serial/DeletePaused 2.4
270 TestPause/serial/VerifyDeletedResources 0.16
271 TestStoppedBinaryUpgrade/Setup 0.63
272 TestStoppedBinaryUpgrade/Upgrade 84.61
273 TestStoppedBinaryUpgrade/MinikubeLogs 1.68
282 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
283 TestNoKubernetes/serial/StartWithK8s 45.94
284 TestNoKubernetes/serial/StartWithStopK8s 15.15
296 TestNoKubernetes/serial/Start 8.42
297 TestNoKubernetes/serial/VerifyK8sNotRunning 0.31
298 TestNoKubernetes/serial/ProfileList 1.06
299 TestNoKubernetes/serial/Stop 1.25
300 TestNoKubernetes/serial/StartNoArgs 10.32
301 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.46
303 TestStartStop/group/old-k8s-version/serial/FirstStart 139.11
304 TestStartStop/group/old-k8s-version/serial/DeployApp 9.55
305 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.2
306 TestStartStop/group/old-k8s-version/serial/Stop 11.29
307 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
308 TestStartStop/group/old-k8s-version/serial/SecondStart 145.16
310 TestStartStop/group/no-preload/serial/FirstStart 52.86
311 TestStartStop/group/no-preload/serial/DeployApp 9.4
312 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.19
313 TestStartStop/group/no-preload/serial/Stop 11
314 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
315 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
316 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
317 TestStartStop/group/no-preload/serial/SecondStart 294.08
318 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.21
319 TestStartStop/group/old-k8s-version/serial/Pause 3.56
321 TestStartStop/group/embed-certs/serial/FirstStart 49.64
322 TestStartStop/group/embed-certs/serial/DeployApp 10.41
323 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.16
324 TestStartStop/group/embed-certs/serial/Stop 11.02
325 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
326 TestStartStop/group/embed-certs/serial/SecondStart 267.2
327 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
328 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
329 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.21
330 TestStartStop/group/no-preload/serial/Pause 2.94
332 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 74.18
333 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
334 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.14
335 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.3
336 TestStartStop/group/embed-certs/serial/Pause 2.82
338 TestStartStop/group/newest-cni/serial/FirstStart 36.69
339 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.5
340 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.26
341 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.94
342 TestStartStop/group/newest-cni/serial/DeployApp 0
343 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.3
344 TestStartStop/group/newest-cni/serial/Stop 11.12
345 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
346 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 291.73
347 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.25
348 TestStartStop/group/newest-cni/serial/SecondStart 26.26
349 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
350 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
351 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
352 TestStartStop/group/newest-cni/serial/Pause 2.81
353 TestNetworkPlugins/group/auto/Start 45.42
354 TestNetworkPlugins/group/auto/KubeletFlags 0.29
355 TestNetworkPlugins/group/auto/NetCatPod 10.29
356 TestNetworkPlugins/group/auto/DNS 0.18
357 TestNetworkPlugins/group/auto/Localhost 0.19
358 TestNetworkPlugins/group/auto/HairPin 0.17
359 TestNetworkPlugins/group/flannel/Start 51.99
360 TestNetworkPlugins/group/flannel/ControllerPod 6.01
361 TestNetworkPlugins/group/flannel/KubeletFlags 0.29
362 TestNetworkPlugins/group/flannel/NetCatPod 9.27
363 TestNetworkPlugins/group/flannel/DNS 0.23
364 TestNetworkPlugins/group/flannel/Localhost 0.18
365 TestNetworkPlugins/group/flannel/HairPin 0.18
366 TestNetworkPlugins/group/calico/Start 70.43
367 TestNetworkPlugins/group/calico/ControllerPod 6.01
368 TestNetworkPlugins/group/calico/KubeletFlags 0.29
369 TestNetworkPlugins/group/calico/NetCatPod 10.29
370 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
371 TestNetworkPlugins/group/calico/DNS 0.22
372 TestNetworkPlugins/group/calico/Localhost 0.16
373 TestNetworkPlugins/group/calico/HairPin 0.21
374 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.14
375 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.28
376 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.61
377 TestNetworkPlugins/group/custom-flannel/Start 65.51
378 TestNetworkPlugins/group/false/Start 86.01
379 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.29
380 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.3
381 TestNetworkPlugins/group/custom-flannel/DNS 0.34
382 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
383 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
384 TestNetworkPlugins/group/false/KubeletFlags 0.37
385 TestNetworkPlugins/group/false/NetCatPod 12.4
386 TestNetworkPlugins/group/kindnet/Start 78.89
387 TestNetworkPlugins/group/false/DNS 0.22
388 TestNetworkPlugins/group/false/Localhost 0.15
389 TestNetworkPlugins/group/false/HairPin 0.16
390 TestNetworkPlugins/group/kubenet/Start 79.41
391 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
392 TestNetworkPlugins/group/kindnet/KubeletFlags 0.34
393 TestNetworkPlugins/group/kindnet/NetCatPod 10.27
394 TestNetworkPlugins/group/kindnet/DNS 0.18
395 TestNetworkPlugins/group/kindnet/Localhost 0.18
396 TestNetworkPlugins/group/kindnet/HairPin 0.18
397 TestNetworkPlugins/group/kubenet/KubeletFlags 0.41
398 TestNetworkPlugins/group/kubenet/NetCatPod 11.4
399 TestNetworkPlugins/group/enable-default-cni/Start 50.53
400 TestNetworkPlugins/group/kubenet/DNS 0.31
401 TestNetworkPlugins/group/kubenet/Localhost 0.31
402 TestNetworkPlugins/group/kubenet/HairPin 0.39
403 TestNetworkPlugins/group/bridge/Start 77.54
404 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.41
405 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.39
406 TestNetworkPlugins/group/enable-default-cni/DNS 0.27
407 TestNetworkPlugins/group/enable-default-cni/Localhost 0.24
408 TestNetworkPlugins/group/enable-default-cni/HairPin 0.24
409 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
410 TestNetworkPlugins/group/bridge/NetCatPod 9.26
411 TestNetworkPlugins/group/bridge/DNS 0.18
412 TestNetworkPlugins/group/bridge/Localhost 0.14
413 TestNetworkPlugins/group/bridge/HairPin 0.16
TestDownloadOnly/v1.20.0/json-events (7.41s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-593927 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-593927 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (7.411879128s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (7.41s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-593927
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-593927: exit status 85 (66.107213ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-593927 | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC |          |
	|         | -p download-only-593927        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/06 18:29:21
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 18:29:21.581651    7530 out.go:345] Setting OutFile to fd 1 ...
	I0906 18:29:21.581795    7530 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:29:21.581806    7530 out.go:358] Setting ErrFile to fd 2...
	I0906 18:29:21.581812    7530 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:29:21.582068    7530 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-2220/.minikube/bin
	W0906 18:29:21.582200    7530 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19576-2220/.minikube/config/config.json: open /home/jenkins/minikube-integration/19576-2220/.minikube/config/config.json: no such file or directory
	I0906 18:29:21.582712    7530 out.go:352] Setting JSON to true
	I0906 18:29:21.583501    7530 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":707,"bootTime":1725646655,"procs":148,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0906 18:29:21.583577    7530 start.go:139] virtualization:  
	I0906 18:29:21.587436    7530 out.go:97] [download-only-593927] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W0906 18:29:21.587591    7530 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19576-2220/.minikube/cache/preloaded-tarball: no such file or directory
	I0906 18:29:21.587632    7530 notify.go:220] Checking for updates...
	I0906 18:29:21.590381    7530 out.go:169] MINIKUBE_LOCATION=19576
	I0906 18:29:21.593297    7530 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 18:29:21.596196    7530 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19576-2220/kubeconfig
	I0906 18:29:21.598647    7530 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-2220/.minikube
	I0906 18:29:21.601184    7530 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0906 18:29:21.606230    7530 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0906 18:29:21.606500    7530 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 18:29:21.635379    7530 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0906 18:29:21.635483    7530 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 18:29:21.984992    7530 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-06 18:29:21.975494826 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0906 18:29:21.985105    7530 docker.go:318] overlay module found
	I0906 18:29:21.987810    7530 out.go:97] Using the docker driver based on user configuration
	I0906 18:29:21.987841    7530 start.go:297] selected driver: docker
	I0906 18:29:21.987849    7530 start.go:901] validating driver "docker" against <nil>
	I0906 18:29:21.987965    7530 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 18:29:22.047271    7530 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-06 18:29:22.03760846 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarc
h64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0906 18:29:22.047437    7530 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0906 18:29:22.047740    7530 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0906 18:29:22.047916    7530 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0906 18:29:22.051024    7530 out.go:169] Using Docker driver with root privileges
	I0906 18:29:22.053641    7530 cni.go:84] Creating CNI manager for ""
	I0906 18:29:22.053684    7530 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0906 18:29:22.053779    7530 start.go:340] cluster config:
	{Name:download-only-593927 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-593927 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CR
ISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 18:29:22.056689    7530 out.go:97] Starting "download-only-593927" primary control-plane node in "download-only-593927" cluster
	I0906 18:29:22.056728    7530 cache.go:121] Beginning downloading kic base image for docker with docker
	I0906 18:29:22.059398    7530 out.go:97] Pulling base image v0.0.45 ...
	I0906 18:29:22.059432    7530 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0906 18:29:22.059588    7530 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 in local docker daemon
	I0906 18:29:22.075283    7530 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 to local cache
	I0906 18:29:22.075464    7530 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 in local cache directory
	I0906 18:29:22.075574    7530 image.go:148] Writing gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 to local cache
	I0906 18:29:22.120557    7530 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0906 18:29:22.120590    7530 cache.go:56] Caching tarball of preloaded images
	I0906 18:29:22.120749    7530 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0906 18:29:22.123914    7530 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0906 18:29:22.123962    7530 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0906 18:29:22.224343    7530 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /home/jenkins/minikube-integration/19576-2220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-593927 host does not exist
	  To start a cluster, run: "minikube start -p download-only-593927"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-593927
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnly/v1.31.0/json-events (5.57s)

=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-454037 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-454037 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (5.57270378s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (5.57s)

TestDownloadOnly/v1.31.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-454037
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-454037: exit status 85 (58.781225ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-593927 | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC |                     |
	|         | -p download-only-593927        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | 06 Sep 24 18:29 UTC |
	| delete  | -p download-only-593927        | download-only-593927 | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | 06 Sep 24 18:29 UTC |
	| start   | -o=json --download-only        | download-only-454037 | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC |                     |
	|         | -p download-only-454037        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/06 18:29:29
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 18:29:29.392555    7726 out.go:345] Setting OutFile to fd 1 ...
	I0906 18:29:29.392760    7726 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:29:29.392786    7726 out.go:358] Setting ErrFile to fd 2...
	I0906 18:29:29.392803    7726 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:29:29.393086    7726 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-2220/.minikube/bin
	I0906 18:29:29.393557    7726 out.go:352] Setting JSON to true
	I0906 18:29:29.394377    7726 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":715,"bootTime":1725646655,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0906 18:29:29.394466    7726 start.go:139] virtualization:  
	I0906 18:29:29.396219    7726 out.go:97] [download-only-454037] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0906 18:29:29.396465    7726 notify.go:220] Checking for updates...
	I0906 18:29:29.398766    7726 out.go:169] MINIKUBE_LOCATION=19576
	I0906 18:29:29.400150    7726 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 18:29:29.401452    7726 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19576-2220/kubeconfig
	I0906 18:29:29.402556    7726 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-2220/.minikube
	I0906 18:29:29.403657    7726 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0906 18:29:29.406008    7726 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0906 18:29:29.406320    7726 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 18:29:29.427395    7726 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0906 18:29:29.427507    7726 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 18:29:29.493318    7726 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-06 18:29:29.482526381 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0906 18:29:29.493462    7726 docker.go:318] overlay module found
	I0906 18:29:29.494728    7726 out.go:97] Using the docker driver based on user configuration
	I0906 18:29:29.494751    7726 start.go:297] selected driver: docker
	I0906 18:29:29.494757    7726 start.go:901] validating driver "docker" against <nil>
	I0906 18:29:29.494859    7726 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 18:29:29.553738    7726 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-06 18:29:29.544487069 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0906 18:29:29.553897    7726 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0906 18:29:29.554182    7726 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0906 18:29:29.554335    7726 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0906 18:29:29.555658    7726 out.go:169] Using Docker driver with root privileges
	I0906 18:29:29.556853    7726 cni.go:84] Creating CNI manager for ""
	I0906 18:29:29.556875    7726 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0906 18:29:29.556884    7726 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0906 18:29:29.556948    7726 start.go:340] cluster config:
	{Name:download-only-454037 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-454037 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 18:29:29.558293    7726 out.go:97] Starting "download-only-454037" primary control-plane node in "download-only-454037" cluster
	I0906 18:29:29.558317    7726 cache.go:121] Beginning downloading kic base image for docker with docker
	I0906 18:29:29.559553    7726 out.go:97] Pulling base image v0.0.45 ...
	I0906 18:29:29.559592    7726 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 18:29:29.559682    7726 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 in local docker daemon
	I0906 18:29:29.574547    7726 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 to local cache
	I0906 18:29:29.574667    7726 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 in local cache directory
	I0906 18:29:29.574689    7726 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 in local cache directory, skipping pull
	I0906 18:29:29.574702    7726 image.go:135] gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 exists in cache, skipping pull
	I0906 18:29:29.574724    7726 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 as a tarball
	I0906 18:29:29.622285    7726 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0906 18:29:29.622326    7726 cache.go:56] Caching tarball of preloaded images
	I0906 18:29:29.622472    7726 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 18:29:29.623704    7726 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0906 18:29:29.623721    7726 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 ...
	I0906 18:29:29.705671    7726 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4?checksum=md5:90c22abece392b762c0b4e45be981bb4 -> /home/jenkins/minikube-integration/19576-2220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0906 18:29:33.509439    7726 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 ...
	I0906 18:29:33.509549    7726 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19576-2220/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 ...
	I0906 18:29:34.251821    7726 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0906 18:29:34.252204    7726 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/download-only-454037/config.json ...
	I0906 18:29:34.252237    7726 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/download-only-454037/config.json: {Name:mkc69ea1de562bc4cead626d4caefba80b0aca32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:29:34.252419    7726 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0906 18:29:34.252569    7726 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/19576-2220/.minikube/cache/linux/arm64/v1.31.0/kubectl
	
	
	* The control-plane node download-only-454037 host does not exist
	  To start a cluster, run: "minikube start -p download-only-454037"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.06s)

TestDownloadOnly/v1.31.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.22s)

TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-454037
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.12s)

TestBinaryMirror (0.55s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-294650 --alsologtostderr --binary-mirror http://127.0.0.1:45513 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-294650" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-294650
--- PASS: TestBinaryMirror (0.55s)

TestOffline (90.62s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-arm64 start -p offline-docker-393182 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-arm64 start -p offline-docker-393182 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (1m28.506523854s)
helpers_test.go:175: Cleaning up "offline-docker-393182" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p offline-docker-393182
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p offline-docker-393182: (2.113833956s)
--- PASS: TestOffline (90.62s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-724441
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-724441: exit status 85 (78.284164ms)

-- stdout --
	* Profile "addons-724441" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-724441"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-724441
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-724441: exit status 85 (83.742605ms)

-- stdout --
	* Profile "addons-724441" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-724441"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

TestAddons/Setup (221.38s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-724441 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-724441 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns: (3m41.378136864s)
--- PASS: TestAddons/Setup (221.38s)

TestAddons/serial/Volcano (42.29s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:913: volcano-controller stabilized in 55.368725ms
addons_test.go:905: volcano-admission stabilized in 55.464503ms
addons_test.go:897: volcano-scheduler stabilized in 55.52728ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-5mpv9" [a6da5643-aafa-4cac-b8b4-824742aca4ea] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003642646s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-29xzd" [cf280140-6117-4aa2-8bb1-083f6731c837] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 6.003723241s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-2bnnd" [2b19e7bf-dd71-4dad-aa39-de5560ba9f1c] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.00408833s
addons_test.go:932: (dbg) Run:  kubectl --context addons-724441 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-724441 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-724441 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [e8548295-fccc-4307-b1de-9191e5f6e3f5] Pending
helpers_test.go:344: "test-job-nginx-0" [e8548295-fccc-4307-b1de-9191e5f6e3f5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [e8548295-fccc-4307-b1de-9191e5f6e3f5] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 14.006672383s
addons_test.go:968: (dbg) Run:  out/minikube-linux-arm64 -p addons-724441 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-linux-arm64 -p addons-724441 addons disable volcano --alsologtostderr -v=1: (10.60405716s)
--- PASS: TestAddons/serial/Volcano (42.29s)

TestAddons/serial/GCPAuth/Namespaces (0.18s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-724441 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-724441 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

TestAddons/parallel/Ingress (19.52s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-724441 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-724441 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-724441 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [65b67a82-29b5-4f7f-8093-27e58a9d23b6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [65b67a82-29b5-4f7f-8093-27e58a9d23b6] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003441781s
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-724441 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-724441 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-724441 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p addons-724441 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-arm64 -p addons-724441 addons disable ingress-dns --alsologtostderr -v=1: (1.063979435s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-arm64 -p addons-724441 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-arm64 -p addons-724441 addons disable ingress --alsologtostderr -v=1: (7.695190743s)
--- PASS: TestAddons/parallel/Ingress (19.52s)

TestAddons/parallel/InspektorGadget (10.82s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-6dprc" [906a9e40-d763-41ab-9ac4-90b46e7078d2] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004399257s
addons_test.go:851: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-724441
addons_test.go:851: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-724441: (5.812806173s)
--- PASS: TestAddons/parallel/InspektorGadget (10.82s)

TestAddons/parallel/MetricsServer (5.74s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.85139ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-kfplw" [116ab06d-003d-4681-8809-50f22c5539e2] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004442016s
addons_test.go:417: (dbg) Run:  kubectl --context addons-724441 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-arm64 -p addons-724441 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.74s)

TestAddons/parallel/CSI (40.67s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 5.001773ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-724441 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-724441 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-724441 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-724441 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-724441 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [1d84abf9-e9be-43ab-b3ea-0ca0747a4370] Pending
helpers_test.go:344: "task-pv-pod" [1d84abf9-e9be-43ab-b3ea-0ca0747a4370] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [1d84abf9-e9be-43ab-b3ea-0ca0747a4370] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.003273981s
addons_test.go:590: (dbg) Run:  kubectl --context addons-724441 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-724441 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-724441 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-724441 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-724441 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-724441 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-724441 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-724441 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-724441 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-724441 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-724441 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-724441 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-724441 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-724441 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-724441 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-724441 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-724441 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-724441 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-724441 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [a57c3d3c-4ead-4407-9585-26a05947e910] Pending
helpers_test.go:344: "task-pv-pod-restore" [a57c3d3c-4ead-4407-9585-26a05947e910] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [a57c3d3c-4ead-4407-9585-26a05947e910] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.008712688s
addons_test.go:632: (dbg) Run:  kubectl --context addons-724441 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-724441 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-724441 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-arm64 -p addons-724441 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-arm64 -p addons-724441 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.68579615s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-arm64 -p addons-724441 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (40.67s)

TestAddons/parallel/Headlamp (16.62s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-724441 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-hzx8d" [5361887f-1f3c-4ceb-89c4-3af586bb0f88] Pending
helpers_test.go:344: "headlamp-57fb76fcdb-hzx8d" [5361887f-1f3c-4ceb-89c4-3af586bb0f88] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-hzx8d" [5361887f-1f3c-4ceb-89c4-3af586bb0f88] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.00350532s
addons_test.go:839: (dbg) Run:  out/minikube-linux-arm64 -p addons-724441 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-arm64 -p addons-724441 addons disable headlamp --alsologtostderr -v=1: (5.67363199s)
--- PASS: TestAddons/parallel/Headlamp (16.62s)

TestAddons/parallel/CloudSpanner (5.47s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-gq6gh" [91869a00-5181-4d9d-ade9-b6ee168a279a] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.00335286s
addons_test.go:870: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-724441
--- PASS: TestAddons/parallel/CloudSpanner (5.47s)

TestAddons/parallel/LocalPath (53.43s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-724441 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-724441 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-724441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-724441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-724441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-724441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-724441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-724441 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [218274f1-5a48-42d4-8048-07710e265499] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [218274f1-5a48-42d4-8048-07710e265499] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [218274f1-5a48-42d4-8048-07710e265499] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003034934s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-724441 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-arm64 -p addons-724441 ssh "cat /opt/local-path-provisioner/pvc-7aca5f61-b674-43cb-8d89-d088d3ea181f_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-724441 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-724441 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-arm64 -p addons-724441 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-arm64 -p addons-724441 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.280502285s)
--- PASS: TestAddons/parallel/LocalPath (53.43s)

TestAddons/parallel/NvidiaDevicePlugin (6.45s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-n2d4c" [0d57344b-eb6f-437a-9097-2b55afb1f7a1] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003676935s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-724441
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.45s)

TestAddons/parallel/Yakd (10.69s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-47sd4" [70068c2f-1067-4b0e-a526-5542d016a541] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004370858s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-arm64 -p addons-724441 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-arm64 -p addons-724441 addons disable yakd --alsologtostderr -v=1: (5.682040811s)
--- PASS: TestAddons/parallel/Yakd (10.69s)

TestAddons/StoppedEnableDisable (6.02s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-724441
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-724441: (5.75603679s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-724441
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-724441
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-724441
--- PASS: TestAddons/StoppedEnableDisable (6.02s)

TestCertOptions (42.86s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-030087 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
E0906 19:30:12.022624    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/skaffold-923947/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-030087 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (39.138953278s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-030087 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-030087 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-030087 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-030087" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-030087
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-030087: (2.698624621s)
--- PASS: TestCertOptions (42.86s)

TestCertExpiration (246.56s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-789547 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-789547 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (39.638127542s)
E0906 19:32:46.063670    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/functional-422075/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-789547 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-789547 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (24.574126301s)
helpers_test.go:175: Cleaning up "cert-expiration-789547" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-789547
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-789547: (2.349630403s)
--- PASS: TestCertExpiration (246.56s)

TestDockerFlags (34.71s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-arm64 start -p docker-flags-854662 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E0906 19:27:46.063973    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/functional-422075/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:27:55.888510    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/skaffold-923947/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:51: (dbg) Done: out/minikube-linux-arm64 start -p docker-flags-854662 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (31.864376951s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-854662 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-854662 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-854662" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-flags-854662
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-flags-854662: (2.25172087s)
--- PASS: TestDockerFlags (34.71s)

TestForceSystemdFlag (48.42s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-145466 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-145466 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (45.416327407s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-145466 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-145466" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-145466
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-145466: (2.466465359s)
--- PASS: TestForceSystemdFlag (48.42s)

TestForceSystemdEnv (41.81s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-362075 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-362075 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (39.168980935s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-362075 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-362075" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-362075
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-362075: (2.274408814s)
--- PASS: TestForceSystemdEnv (41.81s)

TestErrorSpam/setup (30.58s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-093947 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-093947 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-093947 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-093947 --driver=docker  --container-runtime=docker: (30.583349044s)
--- PASS: TestErrorSpam/setup (30.58s)

TestErrorSpam/start (0.85s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-093947 --log_dir /tmp/nospam-093947 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-093947 --log_dir /tmp/nospam-093947 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-093947 --log_dir /tmp/nospam-093947 start --dry-run
--- PASS: TestErrorSpam/start (0.85s)

TestErrorSpam/status (0.98s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-093947 --log_dir /tmp/nospam-093947 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-093947 --log_dir /tmp/nospam-093947 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-093947 --log_dir /tmp/nospam-093947 status
--- PASS: TestErrorSpam/status (0.98s)

TestErrorSpam/pause (1.37s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-093947 --log_dir /tmp/nospam-093947 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-093947 --log_dir /tmp/nospam-093947 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-093947 --log_dir /tmp/nospam-093947 pause
--- PASS: TestErrorSpam/pause (1.37s)

TestErrorSpam/unpause (1.47s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-093947 --log_dir /tmp/nospam-093947 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-093947 --log_dir /tmp/nospam-093947 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-093947 --log_dir /tmp/nospam-093947 unpause
--- PASS: TestErrorSpam/unpause (1.47s)

TestErrorSpam/stop (2.21s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-093947 --log_dir /tmp/nospam-093947 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-093947 --log_dir /tmp/nospam-093947 stop: (1.995759818s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-093947 --log_dir /tmp/nospam-093947 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-093947 --log_dir /tmp/nospam-093947 stop
--- PASS: TestErrorSpam/stop (2.21s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19576-2220/.minikube/files/etc/test/nested/copy/7525/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (75.55s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-422075 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-422075 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (1m15.552844909s)
--- PASS: TestFunctional/serial/StartWithProxy (75.55s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (27.76s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-422075 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-422075 --alsologtostderr -v=8: (27.755912962s)
functional_test.go:663: soft start took 27.759819833s for "functional-422075" cluster.
--- PASS: TestFunctional/serial/SoftStart (27.76s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.11s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-422075 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.35s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-422075 cache add registry.k8s.io/pause:3.1: (1.134527697s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-422075 cache add registry.k8s.io/pause:3.3: (1.20601173s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-422075 cache add registry.k8s.io/pause:latest: (1.005663507s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.35s)

TestFunctional/serial/CacheCmd/cache/add_local (0.98s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-422075 /tmp/TestFunctionalserialCacheCmdcacheadd_local2038054928/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 cache add minikube-local-cache-test:functional-422075
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 cache delete minikube-local-cache-test:functional-422075
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-422075
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.98s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.34s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.34s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.56s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-422075 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (285.125642ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.56s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 kubectl -- --context functional-422075 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-422075 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (43.67s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-422075 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-422075 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (43.673781828s)
functional_test.go:761: restart took 43.673895321s for "functional-422075" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (43.67s)

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-422075 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.14s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-422075 logs: (1.143983412s)
--- PASS: TestFunctional/serial/LogsCmd (1.14s)

TestFunctional/serial/LogsFileCmd (1.18s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 logs --file /tmp/TestFunctionalserialLogsFileCmd3041549567/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-422075 logs --file /tmp/TestFunctionalserialLogsFileCmd3041549567/001/logs.txt: (1.178239425s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.18s)

TestFunctional/serial/InvalidService (5.01s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-422075 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-422075
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-422075: exit status 115 (505.247232ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32399 |
	|-----------|-------------|-------------|---------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-422075 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-422075 delete -f testdata/invalidsvc.yaml: (1.209120321s)
--- PASS: TestFunctional/serial/InvalidService (5.01s)

TestFunctional/parallel/ConfigCmd (0.45s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-422075 config get cpus: exit status 14 (76.367856ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-422075 config get cpus: exit status 14 (79.52441ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.45s)

TestFunctional/parallel/DashboardCmd (12.26s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-422075 --alsologtostderr -v=1]
E0906 18:48:20.719977    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/addons-724441/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-422075 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 49094: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.26s)

TestFunctional/parallel/DryRun (0.42s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-422075 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-422075 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (184.213069ms)
-- stdout --
	* [functional-422075] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19576
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19576-2220/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-2220/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0906 18:48:19.638742   48783 out.go:345] Setting OutFile to fd 1 ...
	I0906 18:48:19.638863   48783 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:48:19.638877   48783 out.go:358] Setting ErrFile to fd 2...
	I0906 18:48:19.638883   48783 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:48:19.639188   48783 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-2220/.minikube/bin
	I0906 18:48:19.639787   48783 out.go:352] Setting JSON to false
	I0906 18:48:19.640699   48783 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":1845,"bootTime":1725646655,"procs":240,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0906 18:48:19.640778   48783 start.go:139] virtualization:  
	I0906 18:48:19.643933   48783 out.go:177] * [functional-422075] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0906 18:48:19.647296   48783 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 18:48:19.647455   48783 notify.go:220] Checking for updates...
	I0906 18:48:19.652850   48783 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 18:48:19.655460   48783 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19576-2220/kubeconfig
	I0906 18:48:19.658234   48783 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-2220/.minikube
	I0906 18:48:19.660769   48783 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0906 18:48:19.663355   48783 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 18:48:19.670243   48783 config.go:182] Loaded profile config "functional-422075": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 18:48:19.670828   48783 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 18:48:19.700351   48783 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0906 18:48:19.700450   48783 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 18:48:19.764910   48783 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-06 18:48:19.755568837 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0906 18:48:19.765017   48783 docker.go:318] overlay module found
	I0906 18:48:19.767903   48783 out.go:177] * Using the docker driver based on existing profile
	I0906 18:48:19.770435   48783 start.go:297] selected driver: docker
	I0906 18:48:19.770509   48783 start.go:901] validating driver "docker" against &{Name:functional-422075 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-422075 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144
MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 18:48:19.770720   48783 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 18:48:19.774396   48783 out.go:201] 
	W0906 18:48:19.777102   48783 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0906 18:48:19.779638   48783 out.go:201] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-422075 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.42s)

TestFunctional/parallel/InternationalLanguage (0.2s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-422075 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-422075 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (195.359742ms)
-- stdout --
	* [functional-422075] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19576
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19576-2220/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-2220/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0906 18:48:19.461738   48740 out.go:345] Setting OutFile to fd 1 ...
	I0906 18:48:19.461956   48740 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:48:19.461971   48740 out.go:358] Setting ErrFile to fd 2...
	I0906 18:48:19.461976   48740 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:48:19.462427   48740 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-2220/.minikube/bin
	I0906 18:48:19.462850   48740 out.go:352] Setting JSON to false
	I0906 18:48:19.463940   48740 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":1845,"bootTime":1725646655,"procs":240,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0906 18:48:19.464039   48740 start.go:139] virtualization:  
	I0906 18:48:19.467492   48740 out.go:177] * [functional-422075] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I0906 18:48:19.471022   48740 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 18:48:19.471090   48740 notify.go:220] Checking for updates...
	I0906 18:48:19.476530   48740 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 18:48:19.479144   48740 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19576-2220/kubeconfig
	I0906 18:48:19.481710   48740 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-2220/.minikube
	I0906 18:48:19.485136   48740 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0906 18:48:19.487761   48740 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 18:48:19.490905   48740 config.go:182] Loaded profile config "functional-422075": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 18:48:19.491485   48740 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 18:48:19.516605   48740 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0906 18:48:19.516710   48740 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 18:48:19.580410   48740 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-06 18:48:19.571071372 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0906 18:48:19.580519   48740 docker.go:318] overlay module found
	I0906 18:48:19.583344   48740 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0906 18:48:19.586014   48740 start.go:297] selected driver: docker
	I0906 18:48:19.586032   48740 start.go:901] validating driver "docker" against &{Name:functional-422075 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-422075 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 18:48:19.586136   48740 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 18:48:19.589452   48740 out.go:201] 
	W0906 18:48:19.592048   48740 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0906 18:48:19.594601   48740 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.20s)

TestFunctional/parallel/StatusCmd (1.1s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 status
E0906 18:48:18.312316    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/addons-724441/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
E0906 18:48:18.796318    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/addons-724441/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.10s)

TestFunctional/parallel/ServiceCmdConnect (12.61s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-422075 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-422075 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-rzjt8" [6c4e595a-7775-42d2-8698-63d3000480f7] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-rzjt8" [6c4e595a-7775-42d2-8698-63d3000480f7] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.004102866s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:30424
functional_test.go:1675: http://192.168.49.2:30424: success! body:

Hostname: hello-node-connect-65d86f57f4-rzjt8

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30424
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.61s)

TestFunctional/parallel/AddonsCmd (0.22s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.22s)

TestFunctional/parallel/PersistentVolumeClaim (28.77s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [ec203f7d-d14c-400d-ad50-d0e2503718da] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003571179s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-422075 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-422075 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-422075 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-422075 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [b0c5c4e8-8f87-4b5e-ba6f-4763e63f5081] Pending
helpers_test.go:344: "sp-pod" [b0c5c4e8-8f87-4b5e-ba6f-4763e63f5081] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [b0c5c4e8-8f87-4b5e-ba6f-4763e63f5081] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.003929107s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-422075 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-422075 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-422075 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [a471e367-dd0e-4a0e-a47e-54d22c1ed157] Pending
helpers_test.go:344: "sp-pod" [a471e367-dd0e-4a0e-a47e-54d22c1ed157] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [a471e367-dd0e-4a0e-a47e-54d22c1ed157] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003810803s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-422075 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (28.77s)

TestFunctional/parallel/SSHCmd (0.83s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.83s)

TestFunctional/parallel/CpCmd (2.23s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 ssh -n functional-422075 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 cp functional-422075:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3880659775/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 ssh -n functional-422075 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 ssh -n functional-422075 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.23s)

TestFunctional/parallel/FileSync (0.32s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/7525/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 ssh "sudo cat /etc/test/nested/copy/7525/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.32s)

TestFunctional/parallel/CertSync (2.11s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/7525.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 ssh "sudo cat /etc/ssl/certs/7525.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/7525.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 ssh "sudo cat /usr/share/ca-certificates/7525.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/75252.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 ssh "sudo cat /etc/ssl/certs/75252.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/75252.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 ssh "sudo cat /usr/share/ca-certificates/75252.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.11s)

TestFunctional/parallel/NodeLabels (0.12s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-422075 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.12s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.34s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-422075 ssh "sudo systemctl is-active crio": exit status 1 (335.17096ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.34s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.61s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-422075 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-422075 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-422075 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 45914: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-422075 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.61s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-422075 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.43s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-422075 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [db800747-e4ff-4da6-a7e4-0bcd1b41b884] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [db800747-e4ff-4da6-a7e4-0bcd1b41b884] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.005609842s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.43s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-422075 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.98.142.36 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-422075 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.25s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-422075 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-422075 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-nt69b" [ad02d5e8-a36a-4a21-a6b9-316471f27d65] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-nt69b" [ad02d5e8-a36a-4a21-a6b9-316471f27d65] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.004752749s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.25s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "361.095648ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "62.979307ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

TestFunctional/parallel/ServiceCmd/List (0.55s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.55s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.46s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "399.09753ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "64.842463ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.46s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.63s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 service list -o json
functional_test.go:1494: Took "634.351334ms" to run "out/minikube-linux-arm64 -p functional-422075 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.63s)

TestFunctional/parallel/MountCmd/any-port (8.04s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-422075 /tmp/TestFunctionalparallelMountCmdany-port3225851267/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1725648496484975013" to /tmp/TestFunctionalparallelMountCmdany-port3225851267/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1725648496484975013" to /tmp/TestFunctionalparallelMountCmdany-port3225851267/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1725648496484975013" to /tmp/TestFunctionalparallelMountCmdany-port3225851267/001/test-1725648496484975013
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-422075 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (429.491286ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep  6 18:48 created-by-test
-rw-r--r-- 1 docker docker 24 Sep  6 18:48 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep  6 18:48 test-1725648496484975013
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 ssh cat /mount-9p/test-1725648496484975013
E0906 18:48:18.147680    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/addons-724441/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:48:18.154658    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/addons-724441/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:48:18.166001    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/addons-724441/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-422075 replace --force -f testdata/busybox-mount-test.yaml
E0906 18:48:18.189543    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/addons-724441/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:48:18.230902    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/addons-724441/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [7f759047-24ec-45da-841c-d520f69404e6] Pending
E0906 18:48:18.474451    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/addons-724441/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox-mount" [7f759047-24ec-45da-841c-d520f69404e6] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
E0906 18:48:19.438456    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/addons-724441/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox-mount" [7f759047-24ec-45da-841c-d520f69404e6] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
E0906 18:48:23.281621    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/addons-724441/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox-mount" [7f759047-24ec-45da-841c-d520f69404e6] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003655466s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-422075 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-422075 /tmp/TestFunctionalparallelMountCmdany-port3225851267/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.04s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.42s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:30487
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.42s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.44s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.44s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.48s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:30487
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.48s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.34s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-422075 /tmp/TestFunctionalparallelMountCmdspecific-port3028989207/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-422075 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (523.238626ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-422075 /tmp/TestFunctionalparallelMountCmdspecific-port3028989207/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-422075 ssh "sudo umount -f /mount-9p": exit status 1 (317.604915ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-422075 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-422075 /tmp/TestFunctionalparallelMountCmdspecific-port3028989207/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.34s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.42s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-422075 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3074238032/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-422075 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3074238032/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-422075 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3074238032/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-422075 ssh "findmnt -T" /mount1: exit status 1 (935.141391ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 ssh "findmnt -T" /mount1
E0906 18:48:28.403827    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/addons-724441/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-422075 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-422075 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3074238032/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-422075 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3074238032/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-422075 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3074238032/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.42s)

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (1.15s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-422075 version -o=json --components: (1.147002724s)
--- PASS: TestFunctional/parallel/Version/components (1.15s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-422075 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-422075
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-422075
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-422075 image ls --format short --alsologtostderr:
I0906 18:48:36.604830   51933 out.go:345] Setting OutFile to fd 1 ...
I0906 18:48:36.605029   51933 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0906 18:48:36.605039   51933 out.go:358] Setting ErrFile to fd 2...
I0906 18:48:36.605044   51933 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0906 18:48:36.605300   51933 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-2220/.minikube/bin
I0906 18:48:36.605936   51933 config.go:182] Loaded profile config "functional-422075": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0906 18:48:36.606107   51933 config.go:182] Loaded profile config "functional-422075": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0906 18:48:36.606649   51933 cli_runner.go:164] Run: docker container inspect functional-422075 --format={{.State.Status}}
I0906 18:48:36.625556   51933 ssh_runner.go:195] Run: systemctl --version
I0906 18:48:36.625610   51933 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-422075
I0906 18:48:36.647458   51933 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19576-2220/.minikube/machines/functional-422075/id_rsa Username:docker}
I0906 18:48:36.742072   51933 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)
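The stderr above resolves the guest's SSH port with a Go template, `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, against `docker container inspect` data. A standalone sketch of that double `index` lookup, using a hand-built stand-in for the inspect structure (the type and helper names here are illustrative, not Docker's API):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// portBinding mimics the shape of one entry in NetworkSettings.Ports.
type portBinding struct{ HostPort string }

// hostPort renders the same template shape minikube's log shows:
// index into the port map by key, take element 0, read HostPort.
func hostPort(ports map[string][]portBinding, key string) (string, error) {
	tmpl, err := template.New("p").Parse(
		`{{(index (index . "` + key + `") 0).HostPort}}`)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, ports); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	ports := map[string][]portBinding{"22/tcp": {{HostPort: "32778"}}}
	p, err := hostPort(ports, "22/tcp")
	fmt.Println(p, err)
}
```

Indexing a missing key yields a nil slice, so the inner `index ... 0` fails at execute time, which is why such inspect queries error out when a container exposes no port for the key.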

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-422075 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/minikube-local-cache-test | functional-422075 | fc1f035fe3d14 | 30B    |
| docker.io/library/nginx                     | alpine            | 70594c812316a | 47MB   |
| registry.k8s.io/kube-controller-manager     | v1.31.0           | fcb0683e6bdbd | 85.9MB |
| registry.k8s.io/pause                       | 3.10              | afb61768ce381 | 514kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/kube-apiserver              | v1.31.0           | cd0f0ae0ec9e0 | 91.5MB |
| registry.k8s.io/kube-scheduler              | v1.31.0           | fbbbd428abb4d | 66MB   |
| registry.k8s.io/etcd                        | 3.5.15-0          | 27e3830e14027 | 139MB  |
| registry.k8s.io/coredns/coredns             | v1.11.1           | 2437cf7621777 | 57.4MB |
| docker.io/kicbase/echo-server               | functional-422075 | ce2d2cda2d858 | 4.78MB |
| registry.k8s.io/kube-proxy                  | v1.31.0           | 71d55d66fd4ee | 94.7MB |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| docker.io/library/nginx                     | latest            | 195245f0c7927 | 193MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-422075 image ls --format table --alsologtostderr:
I0906 18:48:37.683729   52208 out.go:345] Setting OutFile to fd 1 ...
I0906 18:48:37.684111   52208 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0906 18:48:37.684126   52208 out.go:358] Setting ErrFile to fd 2...
I0906 18:48:37.684132   52208 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0906 18:48:37.684380   52208 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-2220/.minikube/bin
I0906 18:48:37.685021   52208 config.go:182] Loaded profile config "functional-422075": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0906 18:48:37.685145   52208 config.go:182] Loaded profile config "functional-422075": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0906 18:48:37.685707   52208 cli_runner.go:164] Run: docker container inspect functional-422075 --format={{.State.Status}}
I0906 18:48:37.703571   52208 ssh_runner.go:195] Run: systemctl --version
I0906 18:48:37.703638   52208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-422075
I0906 18:48:37.719530   52208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19576-2220/.minikube/machines/functional-422075/id_rsa Username:docker}
I0906 18:48:37.805843   52208 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
E0906 18:48:38.645879    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/addons-724441/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-422075 image ls --format json --alsologtostderr:
[{"id":"fc1f035fe3d14f08ab5cd07e7b59845b42336309ffd446b66d70b144331aafd7","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-422075"],"size":"30"},{"id":"cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"91500000"},{"id":"71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"94700000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"66000000"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"},{"id":"2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"57400000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"70594c812316a9bc20dd5d679982c6322dc7cf0128687ae9f849d0207783e753","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"47000000"},{"id":"195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139000000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-422075"],"size":"4780000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"85900000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-422075 image ls --format json --alsologtostderr:
I0906 18:48:37.453799   52138 out.go:345] Setting OutFile to fd 1 ...
I0906 18:48:37.453944   52138 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0906 18:48:37.453956   52138 out.go:358] Setting ErrFile to fd 2...
I0906 18:48:37.453962   52138 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0906 18:48:37.454192   52138 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-2220/.minikube/bin
I0906 18:48:37.454917   52138 config.go:182] Loaded profile config "functional-422075": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0906 18:48:37.455092   52138 config.go:182] Loaded profile config "functional-422075": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0906 18:48:37.455813   52138 cli_runner.go:164] Run: docker container inspect functional-422075 --format={{.State.Status}}
I0906 18:48:37.480076   52138 ssh_runner.go:195] Run: systemctl --version
I0906 18:48:37.480161   52138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-422075
I0906 18:48:37.504094   52138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19576-2220/.minikube/machines/functional-422075/id_rsa Username:docker}
I0906 18:48:37.593975   52138 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)
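The stdout above is one JSON array with an `id`, `repoDigests`, `repoTags`, and string-valued `size` per image. A minimal Go sketch of decoding that shape (the struct and function here are illustrative stand-ins, not minikube's actual types):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// image mirrors the fields visible in the `image ls --format json` output above.
type image struct {
	ID       string   `json:"id"`
	RepoTags []string `json:"repoTags"`
	Size     string   `json:"size"` // note: a string, not a number, in this output
}

// decodeImages parses a JSON array like the one the test prints.
func decodeImages(data []byte) ([]image, error) {
	var imgs []image
	if err := json.Unmarshal(data, &imgs); err != nil {
		return nil, err
	}
	return imgs, nil
}

func main() {
	sample := []byte(`[{"id":"afb61768ce381961","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"}]`)
	imgs, err := decodeImages(sample)
	if err != nil {
		panic(err)
	}
	fmt.Println(imgs[0].RepoTags[0], imgs[0].Size)
}
```

Because `size` is serialized as a string, a struct field of type `int` would fail to unmarshal here; keeping it a string matches the output as logged.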

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-422075 image ls --format yaml --alsologtostderr:
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: fc1f035fe3d14f08ab5cd07e7b59845b42336309ffd446b66d70b144331aafd7
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-422075
size: "30"
- id: fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "85900000"
- id: 71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "94700000"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "514000"
- id: 2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "57400000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "91500000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 70594c812316a9bc20dd5d679982c6322dc7cf0128687ae9f849d0207783e753
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "47000000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "66000000"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139000000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-422075
size: "4780000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-422075 image ls --format yaml --alsologtostderr:
I0906 18:48:37.198789   52063 out.go:345] Setting OutFile to fd 1 ...
I0906 18:48:37.199004   52063 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0906 18:48:37.199037   52063 out.go:358] Setting ErrFile to fd 2...
I0906 18:48:37.199058   52063 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0906 18:48:37.199434   52063 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-2220/.minikube/bin
I0906 18:48:37.200560   52063 config.go:182] Loaded profile config "functional-422075": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0906 18:48:37.201688   52063 config.go:182] Loaded profile config "functional-422075": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0906 18:48:37.202187   52063 cli_runner.go:164] Run: docker container inspect functional-422075 --format={{.State.Status}}
I0906 18:48:37.227302   52063 ssh_runner.go:195] Run: systemctl --version
I0906 18:48:37.227355   52063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-422075
I0906 18:48:37.256499   52063 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19576-2220/.minikube/machines/functional-422075/id_rsa Username:docker}
I0906 18:48:37.347485   52063 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.23s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-422075 ssh pgrep buildkitd: exit status 1 (280.190453ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 image build -t localhost/my-image:functional-422075 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-422075 image build -t localhost/my-image:functional-422075 testdata/build --alsologtostderr: (2.749107087s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-422075 image build -t localhost/my-image:functional-422075 testdata/build --alsologtostderr:
I0906 18:48:37.107577   52057 out.go:345] Setting OutFile to fd 1 ...
I0906 18:48:37.107928   52057 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0906 18:48:37.107956   52057 out.go:358] Setting ErrFile to fd 2...
I0906 18:48:37.107978   52057 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0906 18:48:37.108248   52057 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-2220/.minikube/bin
I0906 18:48:37.108980   52057 config.go:182] Loaded profile config "functional-422075": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0906 18:48:37.109933   52057 config.go:182] Loaded profile config "functional-422075": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0906 18:48:37.110562   52057 cli_runner.go:164] Run: docker container inspect functional-422075 --format={{.State.Status}}
I0906 18:48:37.146686   52057 ssh_runner.go:195] Run: systemctl --version
I0906 18:48:37.146734   52057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-422075
I0906 18:48:37.194902   52057 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19576-2220/.minikube/machines/functional-422075/id_rsa Username:docker}
I0906 18:48:37.281977   52057 build_images.go:161] Building image from path: /tmp/build.2056444530.tar
I0906 18:48:37.282055   52057 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0906 18:48:37.302725   52057 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2056444530.tar
I0906 18:48:37.306531   52057 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2056444530.tar: stat -c "%s %y" /var/lib/minikube/build/build.2056444530.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2056444530.tar': No such file or directory
I0906 18:48:37.306561   52057 ssh_runner.go:362] scp /tmp/build.2056444530.tar --> /var/lib/minikube/build/build.2056444530.tar (3072 bytes)
I0906 18:48:37.334616   52057 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2056444530
I0906 18:48:37.345152   52057 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2056444530 -xf /var/lib/minikube/build/build.2056444530.tar
I0906 18:48:37.356036   52057 docker.go:360] Building image: /var/lib/minikube/build/build.2056444530
I0906 18:48:37.356152   52057 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-422075 /var/lib/minikube/build/build.2056444530
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.2s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.5s

#6 [2/3] RUN true
#6 DONE 0.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:e2bd21bf1f3703fc44224a8cbdf4be1b743f9bf41bbd486fa0171a8da7607981 done
#8 naming to localhost/my-image:functional-422075 done
#8 DONE 0.1s
I0906 18:48:39.778526   52057 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-422075 /var/lib/minikube/build/build.2056444530: (2.422329113s)
I0906 18:48:39.778599   52057 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2056444530
I0906 18:48:39.787812   52057 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2056444530.tar
I0906 18:48:39.796377   52057 build_images.go:217] Built localhost/my-image:functional-422075 from /tmp/build.2056444530.tar
I0906 18:48:39.796404   52057 build_images.go:133] succeeded building to: functional-422075
I0906 18:48:39.796409   52057 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.23s)
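Note: the BuildKit stages logged above imply what the `testdata/build` context contains. A plausible reconstruction of its Dockerfile, inferred only from the logged `[1/3]`–`[3/3]` steps and not taken from the actual file:

```dockerfile
# Inferred from the build log: busybox base, a no-op RUN layer, then one ADD.
FROM gcr.io/k8s-minikube/busybox:latest
RUN true
ADD content.txt /
```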

TestFunctional/parallel/ImageCommands/Setup (0.71s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-422075
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.71s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.98s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 image load --daemon kicbase/echo-server:functional-422075 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.98s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.82s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 image load --daemon kicbase/echo-server:functional-422075 --alsologtostderr
2024/09/06 18:48:32 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.82s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

TestFunctional/parallel/DockerEnv/bash (1.27s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-422075 docker-env) && out/minikube-linux-arm64 status -p functional-422075"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-422075 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.27s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-422075
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 image load --daemon kicbase/echo-server:functional-422075 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.27s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 image save kicbase/echo-server:functional-422075 /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.38s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 image rm kicbase/echo-server:functional-422075 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.68s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.68s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.61s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-422075
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-422075 image save --daemon kicbase/echo-server:functional-422075 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-422075
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.61s)

TestFunctional/delete_echo-server_images (0.09s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-422075
--- PASS: TestFunctional/delete_echo-server_images (0.09s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-422075
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-422075
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (125.35s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-667788 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0906 18:48:59.127299    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/addons-724441/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:49:40.089530    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/addons-724441/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-667788 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (2m4.557315522s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-667788 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (125.35s)

TestMultiControlPlane/serial/DeployApp (45.75s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-667788 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-667788 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-667788 -- rollout status deployment/busybox: (5.281140026s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-667788 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.1.3 10.244.2.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-667788 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.1.3 10.244.2.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-667788 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.1.3 10.244.2.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-667788 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.1.3 10.244.2.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-667788 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.1.3 10.244.2.2 10.244.0.4'\n\n-- /stdout --"
E0906 18:51:02.011607    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/addons-724441/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-667788 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.1.3 10.244.2.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-667788 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.1.3 10.244.2.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-667788 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-667788 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-667788 -- exec busybox-7dff88458-f6r5d -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-667788 -- exec busybox-7dff88458-hqw5h -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-667788 -- exec busybox-7dff88458-zkbph -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-667788 -- exec busybox-7dff88458-f6r5d -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-667788 -- exec busybox-7dff88458-hqw5h -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-667788 -- exec busybox-7dff88458-zkbph -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-667788 -- exec busybox-7dff88458-f6r5d -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-667788 -- exec busybox-7dff88458-hqw5h -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-667788 -- exec busybox-7dff88458-zkbph -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (45.75s)

TestMultiControlPlane/serial/PingHostFromPods (1.75s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-667788 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-667788 -- exec busybox-7dff88458-f6r5d -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-667788 -- exec busybox-7dff88458-f6r5d -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-667788 -- exec busybox-7dff88458-hqw5h -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-667788 -- exec busybox-7dff88458-hqw5h -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-667788 -- exec busybox-7dff88458-zkbph -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-667788 -- exec busybox-7dff88458-zkbph -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.75s)

TestMultiControlPlane/serial/AddWorkerNode (25.89s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-667788 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-667788 -v=7 --alsologtostderr: (24.84853552s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-667788 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-667788 status -v=7 --alsologtostderr: (1.037057661s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (25.89s)

TestMultiControlPlane/serial/NodeLabels (0.11s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-667788 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.84s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.84s)

TestMultiControlPlane/serial/CopyFile (18.97s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-667788 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-linux-arm64 -p ha-667788 status --output json -v=7 --alsologtostderr: (1.096541253s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-667788 cp testdata/cp-test.txt ha-667788:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-667788 ssh -n ha-667788 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-667788 cp ha-667788:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3249582795/001/cp-test_ha-667788.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-667788 ssh -n ha-667788 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-667788 cp ha-667788:/home/docker/cp-test.txt ha-667788-m02:/home/docker/cp-test_ha-667788_ha-667788-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-667788 ssh -n ha-667788 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-667788 ssh -n ha-667788-m02 "sudo cat /home/docker/cp-test_ha-667788_ha-667788-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-667788 cp ha-667788:/home/docker/cp-test.txt ha-667788-m03:/home/docker/cp-test_ha-667788_ha-667788-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-667788 ssh -n ha-667788 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-667788 ssh -n ha-667788-m03 "sudo cat /home/docker/cp-test_ha-667788_ha-667788-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-667788 cp ha-667788:/home/docker/cp-test.txt ha-667788-m04:/home/docker/cp-test_ha-667788_ha-667788-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-667788 ssh -n ha-667788 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-667788 ssh -n ha-667788-m04 "sudo cat /home/docker/cp-test_ha-667788_ha-667788-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-667788 cp testdata/cp-test.txt ha-667788-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-667788 ssh -n ha-667788-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-667788 cp ha-667788-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3249582795/001/cp-test_ha-667788-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-667788 ssh -n ha-667788-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-667788 cp ha-667788-m02:/home/docker/cp-test.txt ha-667788:/home/docker/cp-test_ha-667788-m02_ha-667788.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-667788 ssh -n ha-667788-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-667788 ssh -n ha-667788 "sudo cat /home/docker/cp-test_ha-667788-m02_ha-667788.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-667788 cp ha-667788-m02:/home/docker/cp-test.txt ha-667788-m03:/home/docker/cp-test_ha-667788-m02_ha-667788-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-667788 ssh -n ha-667788-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-667788 ssh -n ha-667788-m03 "sudo cat /home/docker/cp-test_ha-667788-m02_ha-667788-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-667788 cp ha-667788-m02:/home/docker/cp-test.txt ha-667788-m04:/home/docker/cp-test_ha-667788-m02_ha-667788-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-667788 ssh -n ha-667788-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-667788 ssh -n ha-667788-m04 "sudo cat /home/docker/cp-test_ha-667788-m02_ha-667788-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-667788 cp testdata/cp-test.txt ha-667788-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-667788 ssh -n ha-667788-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-667788 cp ha-667788-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3249582795/001/cp-test_ha-667788-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-667788 ssh -n ha-667788-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-667788 cp ha-667788-m03:/home/docker/cp-test.txt ha-667788:/home/docker/cp-test_ha-667788-m03_ha-667788.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-667788 ssh -n ha-667788-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-667788 ssh -n ha-667788 "sudo cat /home/docker/cp-test_ha-667788-m03_ha-667788.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-667788 cp ha-667788-m03:/home/docker/cp-test.txt ha-667788-m02:/home/docker/cp-test_ha-667788-m03_ha-667788-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-667788 ssh -n ha-667788-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-667788 ssh -n ha-667788-m02 "sudo cat /home/docker/cp-test_ha-667788-m03_ha-667788-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-667788 cp ha-667788-m03:/home/docker/cp-test.txt ha-667788-m04:/home/docker/cp-test_ha-667788-m03_ha-667788-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-667788 ssh -n ha-667788-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-667788 ssh -n ha-667788-m04 "sudo cat /home/docker/cp-test_ha-667788-m03_ha-667788-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-667788 cp testdata/cp-test.txt ha-667788-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-667788 ssh -n ha-667788-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-667788 cp ha-667788-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3249582795/001/cp-test_ha-667788-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-667788 ssh -n ha-667788-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-667788 cp ha-667788-m04:/home/docker/cp-test.txt ha-667788:/home/docker/cp-test_ha-667788-m04_ha-667788.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-667788 ssh -n ha-667788-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-667788 ssh -n ha-667788 "sudo cat /home/docker/cp-test_ha-667788-m04_ha-667788.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-667788 cp ha-667788-m04:/home/docker/cp-test.txt ha-667788-m02:/home/docker/cp-test_ha-667788-m04_ha-667788-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-667788 ssh -n ha-667788-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-667788 ssh -n ha-667788-m02 "sudo cat /home/docker/cp-test_ha-667788-m04_ha-667788-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-667788 cp ha-667788-m04:/home/docker/cp-test.txt ha-667788-m03:/home/docker/cp-test_ha-667788-m04_ha-667788-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-667788 ssh -n ha-667788-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-667788 ssh -n ha-667788-m03 "sudo cat /home/docker/cp-test_ha-667788-m04_ha-667788-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (18.97s)
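The CopyFile steps above all follow the same pattern: `minikube cp` a file onto a node, then `ssh -n <node> "sudo cat …"` the destination and compare it with the source. A self-contained sketch of that copy-then-verify check, with plain `cp` standing in for the minikube transport so it runs without a cluster (the file contents are a made-up placeholder):

```shell
# Copy-then-verify, as each CopyFile step above does: write a source file,
# "copy" it (cp stands in for `minikube -p <profile> cp`), and confirm the
# destination matches byte-for-byte.
src=$(mktemp) && dst=$(mktemp)
printf 'cp-test placeholder contents\n' > "$src"
cp "$src" "$dst"
cmp -s "$src" "$dst" && echo "round-trip OK"
rm -f "$src" "$dst"
```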

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (11.68s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-667788 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-667788 node stop m02 -v=7 --alsologtostderr: (10.888789448s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-667788 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-667788 status -v=7 --alsologtostderr: exit status 7 (786.570256ms)

-- stdout --
	ha-667788
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-667788-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-667788-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-667788-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0906 18:52:32.400057   74838 out.go:345] Setting OutFile to fd 1 ...
	I0906 18:52:32.400239   74838 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:52:32.400249   74838 out.go:358] Setting ErrFile to fd 2...
	I0906 18:52:32.400255   74838 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:52:32.400599   74838 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-2220/.minikube/bin
	I0906 18:52:32.400840   74838 out.go:352] Setting JSON to false
	I0906 18:52:32.400881   74838 mustload.go:65] Loading cluster: ha-667788
	I0906 18:52:32.401501   74838 notify.go:220] Checking for updates...
	I0906 18:52:32.402787   74838 config.go:182] Loaded profile config "ha-667788": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 18:52:32.402808   74838 status.go:255] checking status of ha-667788 ...
	I0906 18:52:32.403709   74838 cli_runner.go:164] Run: docker container inspect ha-667788 --format={{.State.Status}}
	I0906 18:52:32.440110   74838 status.go:330] ha-667788 host status = "Running" (err=<nil>)
	I0906 18:52:32.440136   74838 host.go:66] Checking if "ha-667788" exists ...
	I0906 18:52:32.440439   74838 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-667788
	I0906 18:52:32.461199   74838 host.go:66] Checking if "ha-667788" exists ...
	I0906 18:52:32.461774   74838 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 18:52:32.461862   74838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-667788
	I0906 18:52:32.483865   74838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19576-2220/.minikube/machines/ha-667788/id_rsa Username:docker}
	I0906 18:52:32.587152   74838 ssh_runner.go:195] Run: systemctl --version
	I0906 18:52:32.591597   74838 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 18:52:32.603900   74838 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 18:52:32.684002   74838 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:71 SystemTime:2024-09-06 18:52:32.674446545 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0906 18:52:32.684588   74838 kubeconfig.go:125] found "ha-667788" server: "https://192.168.49.254:8443"
	I0906 18:52:32.684622   74838 api_server.go:166] Checking apiserver status ...
	I0906 18:52:32.684673   74838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 18:52:32.697071   74838 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2339/cgroup
	I0906 18:52:32.707348   74838 api_server.go:182] apiserver freezer: "2:freezer:/docker/f059c2ae0e9f2699512d97db60a559d54c40362ec46184678a37ace579206ee4/kubepods/burstable/pod9f55159ec378648febd34be51d5c1f1e/8aa0ef9dcbe62ead17724549d569446ad56bb9457510f993c5fb830ce1873c5f"
	I0906 18:52:32.707436   74838 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/f059c2ae0e9f2699512d97db60a559d54c40362ec46184678a37ace579206ee4/kubepods/burstable/pod9f55159ec378648febd34be51d5c1f1e/8aa0ef9dcbe62ead17724549d569446ad56bb9457510f993c5fb830ce1873c5f/freezer.state
	I0906 18:52:32.716717   74838 api_server.go:204] freezer state: "THAWED"
	I0906 18:52:32.716751   74838 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0906 18:52:32.724559   74838 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0906 18:52:32.724585   74838 status.go:422] ha-667788 apiserver status = Running (err=<nil>)
	I0906 18:52:32.724596   74838 status.go:257] ha-667788 status: &{Name:ha-667788 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0906 18:52:32.724614   74838 status.go:255] checking status of ha-667788-m02 ...
	I0906 18:52:32.724941   74838 cli_runner.go:164] Run: docker container inspect ha-667788-m02 --format={{.State.Status}}
	I0906 18:52:32.742169   74838 status.go:330] ha-667788-m02 host status = "Stopped" (err=<nil>)
	I0906 18:52:32.742192   74838 status.go:343] host is not running, skipping remaining checks
	I0906 18:52:32.742201   74838 status.go:257] ha-667788-m02 status: &{Name:ha-667788-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0906 18:52:32.742221   74838 status.go:255] checking status of ha-667788-m03 ...
	I0906 18:52:32.742548   74838 cli_runner.go:164] Run: docker container inspect ha-667788-m03 --format={{.State.Status}}
	I0906 18:52:32.760113   74838 status.go:330] ha-667788-m03 host status = "Running" (err=<nil>)
	I0906 18:52:32.760140   74838 host.go:66] Checking if "ha-667788-m03" exists ...
	I0906 18:52:32.760457   74838 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-667788-m03
	I0906 18:52:32.777774   74838 host.go:66] Checking if "ha-667788-m03" exists ...
	I0906 18:52:32.778117   74838 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 18:52:32.778160   74838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-667788-m03
	I0906 18:52:32.794725   74838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/19576-2220/.minikube/machines/ha-667788-m03/id_rsa Username:docker}
	I0906 18:52:32.887074   74838 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 18:52:32.902348   74838 kubeconfig.go:125] found "ha-667788" server: "https://192.168.49.254:8443"
	I0906 18:52:32.902425   74838 api_server.go:166] Checking apiserver status ...
	I0906 18:52:32.902498   74838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 18:52:32.917724   74838 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2182/cgroup
	I0906 18:52:32.930278   74838 api_server.go:182] apiserver freezer: "2:freezer:/docker/56ad987a44e4f66e23dc2ab5b96f44fb5f2e12f28acaab4db4f3a82c62a2d7bb/kubepods/burstable/pod91103cb366674b53ea79a2ce63ba84ba/7c2ad4d3898288474b2d55934e00162fdb2afcada7cfc91d288c0bff0995ba85"
	I0906 18:52:32.930372   74838 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/56ad987a44e4f66e23dc2ab5b96f44fb5f2e12f28acaab4db4f3a82c62a2d7bb/kubepods/burstable/pod91103cb366674b53ea79a2ce63ba84ba/7c2ad4d3898288474b2d55934e00162fdb2afcada7cfc91d288c0bff0995ba85/freezer.state
	I0906 18:52:32.941922   74838 api_server.go:204] freezer state: "THAWED"
	I0906 18:52:32.941947   74838 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0906 18:52:32.958119   74838 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0906 18:52:32.958210   74838 status.go:422] ha-667788-m03 apiserver status = Running (err=<nil>)
	I0906 18:52:32.958238   74838 status.go:257] ha-667788-m03 status: &{Name:ha-667788-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0906 18:52:32.958285   74838 status.go:255] checking status of ha-667788-m04 ...
	I0906 18:52:32.958688   74838 cli_runner.go:164] Run: docker container inspect ha-667788-m04 --format={{.State.Status}}
	I0906 18:52:32.978632   74838 status.go:330] ha-667788-m04 host status = "Running" (err=<nil>)
	I0906 18:52:32.978658   74838 host.go:66] Checking if "ha-667788-m04" exists ...
	I0906 18:52:32.978972   74838 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-667788-m04
	I0906 18:52:32.996284   74838 host.go:66] Checking if "ha-667788-m04" exists ...
	I0906 18:52:32.996680   74838 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 18:52:32.996730   74838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-667788-m04
	I0906 18:52:33.015688   74838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/19576-2220/.minikube/machines/ha-667788-m04/id_rsa Username:docker}
	I0906 18:52:33.106857   74838 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 18:52:33.120635   74838 status.go:257] ha-667788-m04 status: &{Name:ha-667788-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.68s)
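In the stderr log above, the status probe locates the running kube-apiserver by grepping the `freezer:` line out of `/proc/<pid>/cgroup`, then reads `freezer.state` under the cgroup v1 mount to confirm the process is THAWED. The path construction is a plain prefix strip; a sketch with hypothetical container and pod IDs (not taken from the log):

```shell
# Turn a /proc/<pid>/cgroup freezer line into the freezer.state path the
# probe reads. The docker/pod/container IDs here are hypothetical.
line='2:freezer:/docker/abc123/kubepods/burstable/poddef456/c0ffee'
cgpath=${line#*:freezer:}               # -> /docker/abc123/kubepods/...
echo "/sys/fs/cgroup/freezer${cgpath}/freezer.state"
```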

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.54s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.54s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (70.25s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-667788 node start m02 -v=7 --alsologtostderr
E0906 18:52:46.065252    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/functional-422075/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:52:46.071887    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/functional-422075/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:52:46.083258    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/functional-422075/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:52:46.104659    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/functional-422075/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:52:46.146062    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/functional-422075/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:52:46.227516    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/functional-422075/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:52:46.388882    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/functional-422075/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:52:46.710566    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/functional-422075/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:52:47.352564    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/functional-422075/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:52:48.635191    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/functional-422075/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:52:51.197540    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/functional-422075/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:52:56.319873    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/functional-422075/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:53:06.561619    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/functional-422075/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:53:18.146231    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/addons-724441/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:53:27.043814    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/functional-422075/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-667788 node start m02 -v=7 --alsologtostderr: (1m9.095708395s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-667788 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-667788 status -v=7 --alsologtostderr: (1.060965972s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (70.25s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.77s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.77s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (204.71s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-667788 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-667788 -v=7 --alsologtostderr
E0906 18:53:45.852957    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/addons-724441/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:54:08.005722    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/functional-422075/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-667788 -v=7 --alsologtostderr: (34.219196289s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-667788 --wait=true -v=7 --alsologtostderr
E0906 18:55:29.927574    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/functional-422075/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-667788 --wait=true -v=7 --alsologtostderr: (2m50.325899128s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-667788
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (204.71s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (11.34s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-667788 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-667788 node delete m03 -v=7 --alsologtostderr: (10.311168363s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-667788 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.34s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.56s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.56s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (33.06s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-667788 stop -v=7 --alsologtostderr
E0906 18:57:46.062931    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/functional-422075/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-667788 stop -v=7 --alsologtostderr: (32.954147298s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-667788 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-667788 status -v=7 --alsologtostderr: exit status 7 (109.067822ms)

-- stdout --
	ha-667788
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-667788-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-667788-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0906 18:57:54.288221  101926 out.go:345] Setting OutFile to fd 1 ...
	I0906 18:57:54.288605  101926 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:57:54.288613  101926 out.go:358] Setting ErrFile to fd 2...
	I0906 18:57:54.288618  101926 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:57:54.288855  101926 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-2220/.minikube/bin
	I0906 18:57:54.289030  101926 out.go:352] Setting JSON to false
	I0906 18:57:54.289056  101926 mustload.go:65] Loading cluster: ha-667788
	I0906 18:57:54.289523  101926 notify.go:220] Checking for updates...
	I0906 18:57:54.289607  101926 config.go:182] Loaded profile config "ha-667788": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 18:57:54.289630  101926 status.go:255] checking status of ha-667788 ...
	I0906 18:57:54.290464  101926 cli_runner.go:164] Run: docker container inspect ha-667788 --format={{.State.Status}}
	I0906 18:57:54.307108  101926 status.go:330] ha-667788 host status = "Stopped" (err=<nil>)
	I0906 18:57:54.307132  101926 status.go:343] host is not running, skipping remaining checks
	I0906 18:57:54.307140  101926 status.go:257] ha-667788 status: &{Name:ha-667788 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0906 18:57:54.307165  101926 status.go:255] checking status of ha-667788-m02 ...
	I0906 18:57:54.307468  101926 cli_runner.go:164] Run: docker container inspect ha-667788-m02 --format={{.State.Status}}
	I0906 18:57:54.330980  101926 status.go:330] ha-667788-m02 host status = "Stopped" (err=<nil>)
	I0906 18:57:54.331004  101926 status.go:343] host is not running, skipping remaining checks
	I0906 18:57:54.331011  101926 status.go:257] ha-667788-m02 status: &{Name:ha-667788-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0906 18:57:54.331030  101926 status.go:255] checking status of ha-667788-m04 ...
	I0906 18:57:54.331332  101926 cli_runner.go:164] Run: docker container inspect ha-667788-m04 --format={{.State.Status}}
	I0906 18:57:54.347509  101926 status.go:330] ha-667788-m04 host status = "Stopped" (err=<nil>)
	I0906 18:57:54.347529  101926 status.go:343] host is not running, skipping remaining checks
	I0906 18:57:54.347537  101926 status.go:257] ha-667788-m04 status: &{Name:ha-667788-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (33.06s)
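The stdout block above prints one `host:` line per node, which makes it easy to confirm the stop reached every node with a grep count. The heredoc below is an abridged copy of that output (kubelet/apiserver lines omitted):

```shell
# Count nodes whose host is reported Stopped in the status text above
# (abridged: only the name/type/host lines are reproduced).
status=$(cat <<'EOF'
ha-667788
type: Control Plane
host: Stopped
ha-667788-m02
type: Control Plane
host: Stopped
ha-667788-m04
type: Worker
host: Stopped
EOF
)
printf '%s\n' "$status" | grep -c '^host: Stopped'
```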

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (157.02s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-667788 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0906 18:58:13.769207    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/functional-422075/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:58:18.145709    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/addons-724441/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-667788 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (2m36.053709487s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-667788 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (157.02s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.56s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.56s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (45.13s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-667788 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-667788 --control-plane -v=7 --alsologtostderr: (44.116876219s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-667788 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-667788 status -v=7 --alsologtostderr: (1.017313618s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (45.13s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.74s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.74s)

                                                
                                    
TestImageBuild/serial/Setup (34.8s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -p image-758942 --driver=docker  --container-runtime=docker
image_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -p image-758942 --driver=docker  --container-runtime=docker: (34.801215788s)
--- PASS: TestImageBuild/serial/Setup (34.80s)

                                                
                                    
TestImageBuild/serial/NormalBuild (1.98s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-758942
image_test.go:78: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-758942: (1.980194127s)
--- PASS: TestImageBuild/serial/NormalBuild (1.98s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (1.03s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-758942
image_test.go:99: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-758942: (1.031349233s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.03s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (0.98s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-758942
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.98s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.83s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-758942
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.83s)

                                                
                                    
TestJSONOutput/start/Command (44.63s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-047577 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
E0906 19:02:46.062980    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/functional-422075/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-047577 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (44.618650069s)
--- PASS: TestJSONOutput/start/Command (44.63s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.62s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-047577 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.62s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.5s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-047577 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.50s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.99s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-047577 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-047577 --output=json --user=testUser: (5.988335948s)
--- PASS: TestJSONOutput/stop/Command (5.99s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-834093 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-834093 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (73.922645ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"61c6c1cc-8144-4c93-aa1a-9485a52490e9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-834093] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"df959f69-1462-4e48-a708-f2c3f52e8297","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19576"}}
	{"specversion":"1.0","id":"eb908980-f7d7-45c9-b139-be329d629401","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"cfdd66ca-34fe-4b3f-a9ac-802caf442e49","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19576-2220/kubeconfig"}}
	{"specversion":"1.0","id":"769fe386-892b-42e6-92fa-40944e82d0bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-2220/.minikube"}}
	{"specversion":"1.0","id":"e41a738f-f449-4df1-abfe-8bb8c2c561b5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"ad9819c5-8b1e-4ffe-9476-6cf1a883a5c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"4260c31e-70dc-4100-83b7-c2d67629dbf0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-834093" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-834093
--- PASS: TestErrorJSONOutput (0.21s)

                                                
                                    
TestKicCustomNetwork/create_custom_network (32.77s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-258697 --network=
E0906 19:03:18.145867    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/addons-724441/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-258697 --network=: (30.618710354s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-258697" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-258697
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-258697: (2.119910778s)
--- PASS: TestKicCustomNetwork/create_custom_network (32.77s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (33.6s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-915023 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-915023 --network=bridge: (31.527268697s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-915023" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-915023
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-915023: (2.047127518s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (33.60s)

                                                
                                    
TestKicExistingNetwork (30.64s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-381459 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-381459 --network=existing-network: (28.509739426s)
helpers_test.go:175: Cleaning up "existing-network-381459" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-381459
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-381459: (1.970005261s)
--- PASS: TestKicExistingNetwork (30.64s)

                                                
                                    
TestKicCustomSubnet (33.37s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-015563 --subnet=192.168.60.0/24
E0906 19:04:41.214353    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/addons-724441/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-015563 --subnet=192.168.60.0/24: (31.179812396s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-015563 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-015563" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-015563
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-015563: (2.167441698s)
--- PASS: TestKicCustomSubnet (33.37s)

                                                
                                    
TestKicStaticIP (34.48s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-878062 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-878062 --static-ip=192.168.200.200: (32.211450404s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-878062 ip
helpers_test.go:175: Cleaning up "static-ip-878062" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-878062
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-878062: (2.075807273s)
--- PASS: TestKicStaticIP (34.48s)

                                                
                                    
TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (66.35s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-569797 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-569797 --driver=docker  --container-runtime=docker: (29.442700921s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-572455 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-572455 --driver=docker  --container-runtime=docker: (31.499473456s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-569797
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-572455
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-572455" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-572455
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-572455: (2.066815691s)
helpers_test.go:175: Cleaning up "first-569797" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-569797
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-569797: (2.102586107s)
--- PASS: TestMinikubeProfile (66.35s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (8.56s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-452107 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-452107 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (7.557734182s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.56s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.25s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-452107 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (7.68s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-464932 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-464932 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (6.679195707s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.68s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.24s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-464932 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.24s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.49s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-452107 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-452107 --alsologtostderr -v=5: (1.487791628s)
--- PASS: TestMountStart/serial/DeleteFirst (1.49s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-464932 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

                                                
                                    
TestMountStart/serial/Stop (1.21s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-464932
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-464932: (1.208840922s)
--- PASS: TestMountStart/serial/Stop (1.21s)

                                                
                                    
TestMountStart/serial/RestartStopped (8.89s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-464932
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-464932: (7.891753299s)
--- PASS: TestMountStart/serial/RestartStopped (8.89s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-464932 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (84.21s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-450912 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0906 19:07:46.063124    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/functional-422075/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:08:18.145430    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/addons-724441/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-450912 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m23.626667652s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-450912 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (84.21s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (54.48s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-450912 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-450912 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-450912 -- rollout status deployment/busybox: (3.512120469s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-450912 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-450912 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-450912 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-450912 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-450912 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-450912 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
E0906 19:09:09.130448    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/functional-422075/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-450912 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-450912 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-450912 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-450912 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-450912 -- exec busybox-7dff88458-frs7d -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-450912 -- exec busybox-7dff88458-lqlrq -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-450912 -- exec busybox-7dff88458-frs7d -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-450912 -- exec busybox-7dff88458-lqlrq -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-450912 -- exec busybox-7dff88458-frs7d -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-450912 -- exec busybox-7dff88458-lqlrq -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (54.48s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (1.04s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-450912 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-450912 -- exec busybox-7dff88458-frs7d -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-450912 -- exec busybox-7dff88458-frs7d -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-450912 -- exec busybox-7dff88458-lqlrq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-450912 -- exec busybox-7dff88458-lqlrq -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.04s)

                                                
                                    
TestMultiNode/serial/AddNode (18.54s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-450912 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-450912 -v 3 --alsologtostderr: (17.601774176s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-450912 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (18.54s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-450912 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.15s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.47s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-450912 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-450912 cp testdata/cp-test.txt multinode-450912:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-450912 ssh -n multinode-450912 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-450912 cp multinode-450912:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile564955394/001/cp-test_multinode-450912.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-450912 ssh -n multinode-450912 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-450912 cp multinode-450912:/home/docker/cp-test.txt multinode-450912-m02:/home/docker/cp-test_multinode-450912_multinode-450912-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-450912 ssh -n multinode-450912 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-450912 ssh -n multinode-450912-m02 "sudo cat /home/docker/cp-test_multinode-450912_multinode-450912-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-450912 cp multinode-450912:/home/docker/cp-test.txt multinode-450912-m03:/home/docker/cp-test_multinode-450912_multinode-450912-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-450912 ssh -n multinode-450912 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-450912 ssh -n multinode-450912-m03 "sudo cat /home/docker/cp-test_multinode-450912_multinode-450912-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-450912 cp testdata/cp-test.txt multinode-450912-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-450912 ssh -n multinode-450912-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-450912 cp multinode-450912-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile564955394/001/cp-test_multinode-450912-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-450912 ssh -n multinode-450912-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-450912 cp multinode-450912-m02:/home/docker/cp-test.txt multinode-450912:/home/docker/cp-test_multinode-450912-m02_multinode-450912.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-450912 ssh -n multinode-450912-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-450912 ssh -n multinode-450912 "sudo cat /home/docker/cp-test_multinode-450912-m02_multinode-450912.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-450912 cp multinode-450912-m02:/home/docker/cp-test.txt multinode-450912-m03:/home/docker/cp-test_multinode-450912-m02_multinode-450912-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-450912 ssh -n multinode-450912-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-450912 ssh -n multinode-450912-m03 "sudo cat /home/docker/cp-test_multinode-450912-m02_multinode-450912-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-450912 cp testdata/cp-test.txt multinode-450912-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-450912 ssh -n multinode-450912-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-450912 cp multinode-450912-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile564955394/001/cp-test_multinode-450912-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-450912 ssh -n multinode-450912-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-450912 cp multinode-450912-m03:/home/docker/cp-test.txt multinode-450912:/home/docker/cp-test_multinode-450912-m03_multinode-450912.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-450912 ssh -n multinode-450912-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-450912 ssh -n multinode-450912 "sudo cat /home/docker/cp-test_multinode-450912-m03_multinode-450912.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-450912 cp multinode-450912-m03:/home/docker/cp-test.txt multinode-450912-m02:/home/docker/cp-test_multinode-450912-m03_multinode-450912-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-450912 ssh -n multinode-450912-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-450912 ssh -n multinode-450912-m02 "sudo cat /home/docker/cp-test_multinode-450912-m03_multinode-450912-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.91s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-450912 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-450912 node stop m03: (1.210400704s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-450912 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-450912 status: exit status 7 (495.124673ms)

-- stdout --
	multinode-450912
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-450912-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-450912-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-450912 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-450912 status --alsologtostderr: exit status 7 (607.839033ms)

-- stdout --
	multinode-450912
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-450912-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-450912-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0906 19:10:14.823694  177503 out.go:345] Setting OutFile to fd 1 ...
	I0906 19:10:14.823812  177503 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 19:10:14.823823  177503 out.go:358] Setting ErrFile to fd 2...
	I0906 19:10:14.823830  177503 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 19:10:14.824060  177503 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-2220/.minikube/bin
	I0906 19:10:14.824238  177503 out.go:352] Setting JSON to false
	I0906 19:10:14.824274  177503 mustload.go:65] Loading cluster: multinode-450912
	I0906 19:10:14.824312  177503 notify.go:220] Checking for updates...
	I0906 19:10:14.824712  177503 config.go:182] Loaded profile config "multinode-450912": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 19:10:14.824727  177503 status.go:255] checking status of multinode-450912 ...
	I0906 19:10:14.825573  177503 cli_runner.go:164] Run: docker container inspect multinode-450912 --format={{.State.Status}}
	I0906 19:10:14.848855  177503 status.go:330] multinode-450912 host status = "Running" (err=<nil>)
	I0906 19:10:14.848880  177503 host.go:66] Checking if "multinode-450912" exists ...
	I0906 19:10:14.849215  177503 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-450912
	I0906 19:10:14.870903  177503 host.go:66] Checking if "multinode-450912" exists ...
	I0906 19:10:14.871207  177503 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 19:10:14.871256  177503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-450912
	I0906 19:10:14.892389  177503 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/19576-2220/.minikube/machines/multinode-450912/id_rsa Username:docker}
	I0906 19:10:14.995320  177503 ssh_runner.go:195] Run: systemctl --version
	I0906 19:10:14.999938  177503 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 19:10:15.012643  177503 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 19:10:15.156599  177503 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-09-06 19:10:15.130268507 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0906 19:10:15.157231  177503 kubeconfig.go:125] found "multinode-450912" server: "https://192.168.67.2:8443"
	I0906 19:10:15.157262  177503 api_server.go:166] Checking apiserver status ...
	I0906 19:10:15.157313  177503 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 19:10:15.170911  177503 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2209/cgroup
	I0906 19:10:15.181464  177503 api_server.go:182] apiserver freezer: "2:freezer:/docker/f2e9968e1adbee68da4e8bc212403f1e40117b82708a62325bbd2ed97e66b2f2/kubepods/burstable/podcd70ff978aaaab1f07bf8f1260cb9900/aa9e2977dabb74cf7cc82a2a419e7d4a3f513a8b662621228f27c5027ab7072c"
	I0906 19:10:15.181548  177503 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/f2e9968e1adbee68da4e8bc212403f1e40117b82708a62325bbd2ed97e66b2f2/kubepods/burstable/podcd70ff978aaaab1f07bf8f1260cb9900/aa9e2977dabb74cf7cc82a2a419e7d4a3f513a8b662621228f27c5027ab7072c/freezer.state
	I0906 19:10:15.191619  177503 api_server.go:204] freezer state: "THAWED"
	I0906 19:10:15.191659  177503 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0906 19:10:15.202912  177503 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0906 19:10:15.202942  177503 status.go:422] multinode-450912 apiserver status = Running (err=<nil>)
	I0906 19:10:15.202953  177503 status.go:257] multinode-450912 status: &{Name:multinode-450912 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0906 19:10:15.202970  177503 status.go:255] checking status of multinode-450912-m02 ...
	I0906 19:10:15.203299  177503 cli_runner.go:164] Run: docker container inspect multinode-450912-m02 --format={{.State.Status}}
	I0906 19:10:15.220967  177503 status.go:330] multinode-450912-m02 host status = "Running" (err=<nil>)
	I0906 19:10:15.220988  177503 host.go:66] Checking if "multinode-450912-m02" exists ...
	I0906 19:10:15.221316  177503 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-450912-m02
	I0906 19:10:15.238135  177503 host.go:66] Checking if "multinode-450912-m02" exists ...
	I0906 19:10:15.238490  177503 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 19:10:15.238540  177503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-450912-m02
	I0906 19:10:15.255491  177503 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/19576-2220/.minikube/machines/multinode-450912-m02/id_rsa Username:docker}
	I0906 19:10:15.342479  177503 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 19:10:15.354029  177503 status.go:257] multinode-450912-m02 status: &{Name:multinode-450912-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0906 19:10:15.354063  177503 status.go:255] checking status of multinode-450912-m03 ...
	I0906 19:10:15.354443  177503 cli_runner.go:164] Run: docker container inspect multinode-450912-m03 --format={{.State.Status}}
	I0906 19:10:15.375286  177503 status.go:330] multinode-450912-m03 host status = "Stopped" (err=<nil>)
	I0906 19:10:15.375309  177503 status.go:343] host is not running, skipping remaining checks
	I0906 19:10:15.375317  177503 status.go:257] multinode-450912-m03 status: &{Name:multinode-450912-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.31s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-450912 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-450912 node start m03 -v=7 --alsologtostderr: (10.298521009s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-450912 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (11.09s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-450912
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-450912
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-450912: (22.655308086s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-450912 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-450912 --wait=true -v=8 --alsologtostderr: (1m22.121744385s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-450912
--- PASS: TestMultiNode/serial/RestartKeepsNodes (104.90s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-450912 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-450912 node delete m03: (4.914832437s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-450912 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.57s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-450912 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-450912 stop: (21.534784617s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-450912 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-450912 status: exit status 7 (86.756545ms)

-- stdout --
	multinode-450912
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-450912-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-450912 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-450912 status --alsologtostderr: exit status 7 (83.170213ms)

-- stdout --
	multinode-450912
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-450912-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0906 19:12:38.615296  190955 out.go:345] Setting OutFile to fd 1 ...
	I0906 19:12:38.615440  190955 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 19:12:38.615450  190955 out.go:358] Setting ErrFile to fd 2...
	I0906 19:12:38.615456  190955 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 19:12:38.615713  190955 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-2220/.minikube/bin
	I0906 19:12:38.615889  190955 out.go:352] Setting JSON to false
	I0906 19:12:38.615929  190955 mustload.go:65] Loading cluster: multinode-450912
	I0906 19:12:38.616040  190955 notify.go:220] Checking for updates...
	I0906 19:12:38.616344  190955 config.go:182] Loaded profile config "multinode-450912": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0906 19:12:38.616362  190955 status.go:255] checking status of multinode-450912 ...
	I0906 19:12:38.617130  190955 cli_runner.go:164] Run: docker container inspect multinode-450912 --format={{.State.Status}}
	I0906 19:12:38.634415  190955 status.go:330] multinode-450912 host status = "Stopped" (err=<nil>)
	I0906 19:12:38.634439  190955 status.go:343] host is not running, skipping remaining checks
	I0906 19:12:38.634447  190955 status.go:257] multinode-450912 status: &{Name:multinode-450912 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0906 19:12:38.634483  190955 status.go:255] checking status of multinode-450912-m02 ...
	I0906 19:12:38.634783  190955 cli_runner.go:164] Run: docker container inspect multinode-450912-m02 --format={{.State.Status}}
	I0906 19:12:38.654468  190955 status.go:330] multinode-450912-m02 host status = "Stopped" (err=<nil>)
	I0906 19:12:38.654488  190955 status.go:343] host is not running, skipping remaining checks
	I0906 19:12:38.654495  190955 status.go:257] multinode-450912-m02 status: &{Name:multinode-450912-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.71s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-450912 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0906 19:12:46.063999    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/functional-422075/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:13:18.145708    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/addons-724441/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-450912 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (58.227114858s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-450912 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (58.95s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-450912
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-450912-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-450912-m02 --driver=docker  --container-runtime=docker: exit status 14 (85.252797ms)

-- stdout --
	* [multinode-450912-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19576
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19576-2220/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-2220/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-450912-m02' is duplicated with machine name 'multinode-450912-m02' in profile 'multinode-450912'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-450912-m03 --driver=docker  --container-runtime=docker
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-450912-m03 --driver=docker  --container-runtime=docker: (32.795389308s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-450912
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-450912: exit status 80 (578.008089ms)

-- stdout --
	* Adding node m03 to cluster multinode-450912 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-450912-m03 already exists in multinode-450912-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-450912-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-450912-m03: (2.114192903s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (35.62s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-818378 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-818378 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (1m42.861451929s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-818378 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-818378 image pull gcr.io/k8s-minikube/busybox: (2.537920616s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-818378
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-818378: (10.767797579s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-818378 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-818378 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (26.188538969s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-818378 image list
helpers_test.go:175: Cleaning up "test-preload-818378" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-818378
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-818378: (2.301446057s)
--- PASS: TestPreload (144.92s)

TestScheduledStopUnix (105.24s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-090954 --memory=2048 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-090954 --memory=2048 --driver=docker  --container-runtime=docker: (32.050042809s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-090954 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-090954 -n scheduled-stop-090954
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-090954 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-090954 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-090954 -n scheduled-stop-090954
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-090954
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-090954 --schedule 15s
E0906 19:17:46.062908    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/functional-422075/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0906 19:18:18.146369    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/addons-724441/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-090954
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-090954: exit status 7 (65.821473ms)

-- stdout --
	scheduled-stop-090954
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-090954 -n scheduled-stop-090954
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-090954 -n scheduled-stop-090954: exit status 7 (59.939153ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-090954" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-090954
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-090954: (1.669317808s)
--- PASS: TestScheduledStopUnix (105.24s)

TestSkaffold (118.71s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe1879954556 version
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p skaffold-923947 --memory=2600 --driver=docker  --container-runtime=docker
skaffold_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p skaffold-923947 --memory=2600 --driver=docker  --container-runtime=docker: (31.85284635s)
skaffold_test.go:86: copying out/minikube-linux-arm64 to /home/jenkins/workspace/Docker_Linux_docker_arm64/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe1879954556 run --minikube-profile skaffold-923947 --kube-context skaffold-923947 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe1879954556 run --minikube-profile skaffold-923947 --kube-context skaffold-923947 --status-check=true --port-forward=false --interactive=false: (1m11.267936828s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-97b9454b7-67n5n" [ab633080-feb0-4626-94f6-3874fba660b6] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.004400615s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-7ffcdd64c7-psk66" [073d2c13-e7a7-4524-9cd5-4c759a02428b] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.014954217s
helpers_test.go:175: Cleaning up "skaffold-923947" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p skaffold-923947
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p skaffold-923947: (2.964780989s)
--- PASS: TestSkaffold (118.71s)

TestInsufficientStorage (11.42s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-583796 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-583796 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (9.113400139s)

-- stdout --
	{"specversion":"1.0","id":"04b5a56b-6134-45df-b725-7d6caead5620","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-583796] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"077ac0d3-5236-4883-9af3-3c09d23bce34","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19576"}}
	{"specversion":"1.0","id":"049b77bb-33d1-40e3-9d9d-9fd19ada057c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4adf331b-fb22-47eb-9054-fff2e879e711","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19576-2220/kubeconfig"}}
	{"specversion":"1.0","id":"8d0a64a9-e49b-4d59-9f90-350769bef3c4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-2220/.minikube"}}
	{"specversion":"1.0","id":"826b3486-d577-4138-91fa-3f4014732f1d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"464fcb26-dd5c-45cc-9c6f-33c143dbd118","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f56fd758-ab1c-4dfd-92f9-2b2a424c834f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"e166b9d5-fb87-4aa2-b2d8-73476ed2426a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"41ba764f-42c7-400a-ac06-bed45f317755","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"8a76c818-a6d6-4bed-8335-4fc4445085d9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"23b50032-1865-4bd3-8d83-5a9c2c2f500e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-583796\" primary control-plane node in \"insufficient-storage-583796\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"d4dc7a16-e625-4d18-bfed-3dfe00f05b94","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"7c8bcd0b-3fb8-4282-aa81-f22c979c431c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"f6278336-4154-4e11-9337-12e63d4b9993","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-583796 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-583796 --output=json --layout=cluster: exit status 7 (294.750925ms)

-- stdout --
	{"Name":"insufficient-storage-583796","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-583796","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0906 19:20:35.416640  225325 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-583796" does not appear in /home/jenkins/minikube-integration/19576-2220/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-583796 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-583796 --output=json --layout=cluster: exit status 7 (284.804862ms)

-- stdout --
	{"Name":"insufficient-storage-583796","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-583796","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0906 19:20:35.702419  225387 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-583796" does not appear in /home/jenkins/minikube-integration/19576-2220/kubeconfig
	E0906 19:20:35.712927  225387 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/insufficient-storage-583796/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-583796" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-583796
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-583796: (1.721977909s)
--- PASS: TestInsufficientStorage (11.42s)
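Note: the `--layout=cluster` status payloads above are single-line JSON documents, so a harness can assert on fields directly instead of grepping. A minimal sketch (Python used here for brevity; the JSON is copied verbatim from the first status payload above):

```python
import json

# First `--layout=cluster` payload from the test output above.
status_json = (
    '{"Name":"insufficient-storage-583796","StatusCode":507,'
    '"StatusName":"InsufficientStorage",'
    '"StatusDetail":"/var is almost out of disk space",'
    '"BinaryVersion":"v1.34.0",'
    '"Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},'
    '"Nodes":[{"Name":"insufficient-storage-583796","StatusCode":507,'
    '"StatusName":"InsufficientStorage","Components":{'
    '"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},'
    '"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}'
)

status = json.loads(status_json)
print(status["StatusName"])  # top-level cluster state
for node in status["Nodes"]:
    for name, comp in sorted(node["Components"].items()):
        print(name, comp["StatusName"])  # per-component state
```

The exit status 7 accompanies the `InsufficientStorage` (507) state, so both the exit code and the payload carry the same signal.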

TestRunningBinaryUpgrade (79.46s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2638881566 start -p running-upgrade-201391 --memory=2200 --vm-driver=docker  --container-runtime=docker
E0906 19:26:33.966481    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/skaffold-923947/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2638881566 start -p running-upgrade-201391 --memory=2200 --vm-driver=docker  --container-runtime=docker: (39.930902698s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-201391 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-201391 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (36.344808334s)
helpers_test.go:175: Cleaning up "running-upgrade-201391" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-201391
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-201391: (2.291045367s)
--- PASS: TestRunningBinaryUpgrade (79.46s)

TestKubernetesUpgrade (377.13s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-447886 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0906 19:22:46.063088    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/functional-422075/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:23:18.145666    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/addons-724441/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-447886 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m4.179149595s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-447886
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-447886: (1.255841476s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-447886 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-447886 status --format={{.Host}}: exit status 7 (94.546164ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-447886 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-447886 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m35.59168786s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-447886 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-447886 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-447886 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker: exit status 106 (105.264485ms)

-- stdout --
	* [kubernetes-upgrade-447886] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19576
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19576-2220/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-2220/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-447886
	    minikube start -p kubernetes-upgrade-447886 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4478862 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start -p kubernetes-upgrade-447886 --kubernetes-version=v1.31.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-447886 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-447886 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (32.850385695s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-447886" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-447886
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-447886: (2.938850414s)
--- PASS: TestKubernetesUpgrade (377.13s)
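Note: the downgrade guard exercised above exits with code 106 (`K8S_DOWNGRADE_UNSUPPORTED`) rather than attempting the change. The decision reduces to a version comparison; an illustrative sketch of that check (not minikube's actual implementation):

```python
def is_downgrade(current: str, requested: str) -> bool:
    """Return True when the requested Kubernetes version is older than the
    currently deployed one. Versions look like 'v1.31.0'."""
    def parse(v: str) -> tuple:
        return tuple(int(part) for part in v.lstrip("v").split("."))
    return parse(requested) < parse(current)

# Mirrors the transitions exercised above.
print(is_downgrade("v1.31.0", "v1.20.0"))  # True  -> refuse, exit 106
print(is_downgrade("v1.20.0", "v1.31.0"))  # False -> upgrade proceeds
print(is_downgrade("v1.31.0", "v1.31.0"))  # False -> restart at same version
```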

TestMissingContainerUpgrade (157.91s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.640186232 start -p missing-upgrade-992409 --memory=2200 --driver=docker  --container-runtime=docker
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.640186232 start -p missing-upgrade-992409 --memory=2200 --driver=docker  --container-runtime=docker: (1m28.516897874s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-992409
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-992409: (10.361180391s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-992409
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-992409 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-992409 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (55.664168653s)
helpers_test.go:175: Cleaning up "missing-upgrade-992409" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-992409
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-992409: (2.292834462s)
--- PASS: TestMissingContainerUpgrade (157.91s)

TestPause/serial/Start (82.4s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-585573 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
E0906 19:21:21.216579    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/addons-724441/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-585573 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (1m22.399988945s)
--- PASS: TestPause/serial/Start (82.40s)

TestPause/serial/SecondStartNoReconfiguration (39.2s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-585573 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-585573 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (39.179663304s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (39.20s)

TestPause/serial/Pause (0.81s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-585573 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.81s)

TestPause/serial/VerifyStatus (0.45s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-585573 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-585573 --output=json --layout=cluster: exit status 2 (445.170391ms)

-- stdout --
	{"Name":"pause-585573","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-585573","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.45s)
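Note: exit status 2 is expected here; for a paused profile the status JSON uses HTTP-flavored codes (200 OK, 405 Stopped, 418 Paused, 507 InsufficientStorage) and the test asserts on the payload. A small sketch of reading those fields, using an abridged copy of the JSON above:

```python
import json

# Abridged from the `--layout=cluster` payload printed above.
paused = json.loads(
    '{"Name":"pause-585573","StatusCode":418,"StatusName":"Paused",'
    '"Nodes":[{"Name":"pause-585573","StatusCode":200,"StatusName":"OK",'
    '"Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},'
    '"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}'
)

apiserver = paused["Nodes"][0]["Components"]["apiserver"]
print(paused["StatusName"], apiserver["StatusName"])  # Paused Paused
```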

TestPause/serial/Unpause (0.68s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-585573 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.68s)

TestPause/serial/PauseAgain (0.87s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-585573 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.87s)

TestPause/serial/DeletePaused (2.4s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-585573 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-585573 --alsologtostderr -v=5: (2.397500946s)
--- PASS: TestPause/serial/DeletePaused (2.40s)

TestPause/serial/VerifyDeletedResources (0.16s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-585573
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-585573: exit status 1 (15.085004ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-585573: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.16s)
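Note: the `docker volume inspect` failure above is the expected signal: for a volume that no longer exists Docker prints an empty JSON array on stdout and exits non-zero. A tiny sketch of how a checker can interpret that stdout:

```python
import json

# stdout of `docker volume inspect pause-585573` after deletion (from above).
volumes = json.loads("[]")

# An empty array means no volume by that name survived the delete.
print("deleted" if not volumes else "still present")
```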

TestStoppedBinaryUpgrade/Setup (0.63s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.63s)

TestStoppedBinaryUpgrade/Upgrade (84.61s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2212456794 start -p stopped-upgrade-395727 --memory=2200 --vm-driver=docker  --container-runtime=docker
E0906 19:25:12.023575    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/skaffold-923947/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:25:12.033786    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/skaffold-923947/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:25:12.045212    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/skaffold-923947/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:25:12.066584    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/skaffold-923947/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:25:12.108351    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/skaffold-923947/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:25:12.189684    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/skaffold-923947/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:25:12.351351    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/skaffold-923947/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:25:12.672733    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/skaffold-923947/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:25:13.314609    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/skaffold-923947/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:25:14.596895    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/skaffold-923947/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:25:17.158218    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/skaffold-923947/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:25:22.280109    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/skaffold-923947/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2212456794 start -p stopped-upgrade-395727 --memory=2200 --vm-driver=docker  --container-runtime=docker: (42.150805622s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2212456794 -p stopped-upgrade-395727 stop
E0906 19:25:32.522352    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/skaffold-923947/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2212456794 -p stopped-upgrade-395727 stop: (10.948056484s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-395727 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0906 19:25:49.131786    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/functional-422075/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:25:53.004453    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/skaffold-923947/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-395727 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (31.513234761s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (84.61s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.68s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-395727
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-395727: (1.680707934s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.68s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-842764 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-842764 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (112.326122ms)

-- stdout --
	* [NoKubernetes-842764] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19576
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19576-2220/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-2220/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
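The MK_USAGE failure above is the expected outcome here: the test deliberately passes mutually exclusive flags, and minikube refuses the combination with exit status 14. As a rough illustration of that kind of flag-conflict check (a sketch only; minikube itself is written in Go, and the names below are hypothetical, not minikube's actual code):

```python
import argparse

# Hypothetical sketch of a mutually-exclusive-flag check like the one
# the log shows: --no-kubernetes conflicts with --kubernetes-version.
parser = argparse.ArgumentParser(prog="start")
parser.add_argument("--no-kubernetes", action="store_true")
parser.add_argument("--kubernetes-version")

def validate(argv):
    """Return a usage-error exit code (14) when conflicting flags are combined."""
    args = parser.parse_args(argv)
    if args.no_kubernetes and args.kubernetes_version:
        # Analogous to "Exiting due to MK_USAGE": a usage error, not a crash.
        return 14
    return 0

print(validate(["--no-kubernetes", "--kubernetes-version", "1.20"]))  # prints 14
```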
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

TestNoKubernetes/serial/StartWithK8s (45.94s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-842764 --driver=docker  --container-runtime=docker
E0906 19:28:18.145679    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/addons-724441/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-842764 --driver=docker  --container-runtime=docker: (45.431534514s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-842764 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (45.94s)

TestNoKubernetes/serial/StartWithStopK8s (15.15s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-842764 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-842764 --no-kubernetes --driver=docker  --container-runtime=docker: (12.876074193s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-842764 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-842764 status -o json: exit status 2 (365.906964ms)

-- stdout --
	{"Name":"NoKubernetes-842764","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-842764
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-842764: (1.906021852s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (15.15s)

TestNoKubernetes/serial/Start (8.42s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-842764 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-842764 --no-kubernetes --driver=docker  --container-runtime=docker: (8.415088519s)
--- PASS: TestNoKubernetes/serial/Start (8.42s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-842764 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-842764 "sudo systemctl is-active --quiet service kubelet": exit status 1 (306.555785ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)

TestNoKubernetes/serial/ProfileList (1.06s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.06s)

TestNoKubernetes/serial/Stop (1.25s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-842764
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-842764: (1.254019735s)
--- PASS: TestNoKubernetes/serial/Stop (1.25s)

TestNoKubernetes/serial/StartNoArgs (10.32s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-842764 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-842764 --driver=docker  --container-runtime=docker: (10.322569847s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (10.32s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.46s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-842764 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-842764 "sudo systemctl is-active --quiet service kubelet": exit status 1 (463.772977ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.46s)

TestStartStop/group/old-k8s-version/serial/FirstStart (139.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-875398 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0906 19:30:39.730586    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/skaffold-923947/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-875398 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m19.112456623s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (139.11s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.55s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-875398 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [885d01e1-cea8-4802-9f81-f741a13de0e7] Pending
helpers_test.go:344: "busybox" [885d01e1-cea8-4802-9f81-f741a13de0e7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [885d01e1-cea8-4802-9f81-f741a13de0e7] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003771828s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-875398 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.55s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-875398 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-875398 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.046923582s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-875398 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.20s)

TestStartStop/group/old-k8s-version/serial/Stop (11.29s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-875398 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-875398 --alsologtostderr -v=3: (11.286789489s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.29s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-875398 -n old-k8s-version-875398
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-875398 -n old-k8s-version-875398: exit status 7 (73.790155ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-875398 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/old-k8s-version/serial/SecondStart (145.16s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-875398 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0906 19:33:18.146093    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/addons-724441/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-875398 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m24.77324703s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-875398 -n old-k8s-version-875398
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (145.16s)

TestStartStop/group/no-preload/serial/FirstStart (52.86s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-043162 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
E0906 19:35:12.022721    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/skaffold-923947/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-043162 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (52.864214521s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (52.86s)

TestStartStop/group/no-preload/serial/DeployApp (9.4s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-043162 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3cfdf679-7c2f-4d6a-b0b8-62a80919dc31] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [3cfdf679-7c2f-4d6a-b0b8-62a80919dc31] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004091763s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-043162 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.40s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-043162 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-043162 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.049676963s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-043162 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.19s)

TestStartStop/group/no-preload/serial/Stop (11s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-043162 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-043162 --alsologtostderr -v=3: (11.004011616s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.00s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-dlgsc" [9e98924d-8833-4536-a5a5-e8d8d668d1dc] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004926869s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-dlgsc" [9e98924d-8833-4536-a5a5-e8d8d668d1dc] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00359371s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-875398 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-043162 -n no-preload-043162
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-043162 -n no-preload-043162: exit status 7 (72.204966ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-043162 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/no-preload/serial/SecondStart (294.08s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-043162 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-043162 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (4m53.720693074s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-043162 -n no-preload-043162
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (294.08s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-875398 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.21s)

TestStartStop/group/old-k8s-version/serial/Pause (3.56s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-875398 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-875398 -n old-k8s-version-875398
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-875398 -n old-k8s-version-875398: exit status 2 (411.438856ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-875398 -n old-k8s-version-875398
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-875398 -n old-k8s-version-875398: exit status 2 (371.493878ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-875398 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-875398 -n old-k8s-version-875398
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-875398 -n old-k8s-version-875398
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.56s)

TestStartStop/group/embed-certs/serial/FirstStart (49.64s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-320311 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-320311 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (49.643588638s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (49.64s)

TestStartStop/group/embed-certs/serial/DeployApp (10.41s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-320311 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e971ecfb-1ceb-4d42-bf27-ca7974806bd4] Pending
helpers_test.go:344: "busybox" [e971ecfb-1ceb-4d42-bf27-ca7974806bd4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e971ecfb-1ceb-4d42-bf27-ca7974806bd4] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.003491692s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-320311 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.41s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.16s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-320311 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-320311 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.025333921s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-320311 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.16s)

TestStartStop/group/embed-certs/serial/Stop (11.02s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-320311 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-320311 --alsologtostderr -v=3: (11.02386151s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.02s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-320311 -n embed-certs-320311
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-320311 -n embed-certs-320311: exit status 7 (76.057704ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-320311 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/embed-certs/serial/SecondStart (267.2s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-320311 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
E0906 19:37:46.063763    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/functional-422075/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:37:49.500865    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/old-k8s-version-875398/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:37:49.507299    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/old-k8s-version-875398/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:37:49.518605    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/old-k8s-version-875398/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:37:49.539998    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/old-k8s-version-875398/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:37:49.581372    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/old-k8s-version-875398/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:37:49.663413    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/old-k8s-version-875398/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:37:49.824764    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/old-k8s-version-875398/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:37:50.146461    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/old-k8s-version-875398/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:37:50.788322    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/old-k8s-version-875398/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:37:52.070056    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/old-k8s-version-875398/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:37:54.631450    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/old-k8s-version-875398/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:37:59.753681    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/old-k8s-version-875398/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:38:01.218650    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/addons-724441/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:38:09.995790    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/old-k8s-version-875398/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:38:18.145445    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/addons-724441/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:38:30.477134    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/old-k8s-version-875398/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:39:11.438993    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/old-k8s-version-875398/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:40:12.026942    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/skaffold-923947/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:40:33.361069    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/old-k8s-version-875398/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-320311 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (4m26.804704444s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-320311 -n embed-certs-320311
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (267.20s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-cl8ln" [780358d1-5a3e-4040-80f9-d7969e982556] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004176823s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-cl8ln" [780358d1-5a3e-4040-80f9-d7969e982556] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004132718s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-043162 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-043162 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.21s)

TestStartStop/group/no-preload/serial/Pause (2.94s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-043162 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-043162 -n no-preload-043162
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-043162 -n no-preload-043162: exit status 2 (357.459636ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-043162 -n no-preload-043162
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-043162 -n no-preload-043162: exit status 2 (320.716538ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-043162 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-043162 -n no-preload-043162
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-043162 -n no-preload-043162
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.94s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (74.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-233349 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-233349 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (1m14.179383551s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (74.18s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-b9p6v" [ca344dfd-06cf-4f90-9926-3b6c9090a677] Running
E0906 19:41:35.092917    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/skaffold-923947/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003851604s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.14s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-b9p6v" [ca344dfd-06cf-4f90-9926-3b6c9090a677] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0045946s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-320311 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.14s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-320311 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/embed-certs/serial/Pause (2.82s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-320311 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-320311 -n embed-certs-320311
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-320311 -n embed-certs-320311: exit status 2 (310.987236ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-320311 -n embed-certs-320311
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-320311 -n embed-certs-320311: exit status 2 (318.746751ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-320311 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-320311 -n embed-certs-320311
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-320311 -n embed-certs-320311
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.82s)

TestStartStop/group/newest-cni/serial/FirstStart (36.69s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-097283 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-097283 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (36.693515991s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (36.69s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.5s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-233349 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [809d917c-dfa1-4075-a008-610dc497c523] Pending
helpers_test.go:344: "busybox" [809d917c-dfa1-4075-a008-610dc497c523] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [809d917c-dfa1-4075-a008-610dc497c523] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004495201s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-233349 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.50s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-233349 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-233349 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.116719085s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-233349 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.26s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (10.94s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-233349 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-233349 --alsologtostderr -v=3: (10.939942384s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.94s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.3s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-097283 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-097283 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.299802694s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.30s)

TestStartStop/group/newest-cni/serial/Stop (11.12s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-097283 --alsologtostderr -v=3
E0906 19:42:29.136108    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/functional-422075/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-097283 --alsologtostderr -v=3: (11.117860013s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.12s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-233349 -n default-k8s-diff-port-233349
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-233349 -n default-k8s-diff-port-233349: exit status 7 (80.729449ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-233349 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (291.73s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-233349 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-233349 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (4m51.340320801s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-233349 -n default-k8s-diff-port-233349
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (291.73s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-097283 -n newest-cni-097283
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-097283 -n newest-cni-097283: exit status 7 (87.274714ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-097283 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/newest-cni/serial/SecondStart (26.26s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-097283 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
E0906 19:42:46.062912    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/functional-422075/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:42:49.500334    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/old-k8s-version-875398/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-097283 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (25.793072876s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-097283 -n newest-cni-097283
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (26.26s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-097283 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/newest-cni/serial/Pause (2.81s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-097283 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-097283 -n newest-cni-097283
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-097283 -n newest-cni-097283: exit status 2 (325.723366ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-097283 -n newest-cni-097283
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-097283 -n newest-cni-097283: exit status 2 (299.79236ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-097283 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-097283 -n newest-cni-097283
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-097283 -n newest-cni-097283
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.81s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (45.42s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-709464 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
E0906 19:43:17.202675    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/old-k8s-version-875398/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:43:18.148846    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/addons-724441/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-709464 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (45.41942275s)
--- PASS: TestNetworkPlugins/group/auto/Start (45.42s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-709464 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-709464 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-p49hw" [cae3fea3-9572-4e4a-a54d-dd2bc9ef5701] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-p49hw" [cae3fea3-9572-4e4a-a54d-dd2bc9ef5701] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004711907s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.29s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-709464 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-709464 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-709464 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (51.99s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-709464 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
E0906 19:45:12.022616    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/skaffold-923947/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-709464 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (51.986593996s)
--- PASS: TestNetworkPlugins/group/flannel/Start (51.99s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-nwzhc" [08c0f453-6ed8-4a75-8376-1d46fb764fb8] Running
E0906 19:45:22.327306    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/no-preload-043162/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:45:22.333759    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/no-preload-043162/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:45:22.345273    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/no-preload-043162/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:45:22.366794    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/no-preload-043162/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:45:22.408141    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/no-preload-043162/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:45:22.489594    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/no-preload-043162/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:45:22.651088    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/no-preload-043162/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:45:22.973252    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/no-preload-043162/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:45:23.615606    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/no-preload-043162/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:45:24.897641    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/no-preload-043162/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003770699s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-709464 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (9.27s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-709464 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-vshvb" [273da8ca-2c87-4aaf-88fc-68a1bb61c25d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0906 19:45:27.459025    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/no-preload-043162/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-vshvb" [273da8ca-2c87-4aaf-88fc-68a1bb61c25d] Running
E0906 19:45:32.581307    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/no-preload-043162/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.004801956s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.27s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-709464 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-709464 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-709464 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (70.43s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-709464 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
E0906 19:46:03.304681    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/no-preload-043162/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:46:44.266937    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/no-preload-043162/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-709464 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m10.430273395s)
--- PASS: TestNetworkPlugins/group/calico/Start (70.43s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-4vdvp" [51198a9e-b6fa-46df-be0f-613860a2b939] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004471805s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-709464 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-709464 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-4kh67" [e7641c7d-76c4-413e-8bd5-eeae62297ba1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-4kh67" [e7641c7d-76c4-413e-8bd5-eeae62297ba1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.006839513s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.29s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-674lz" [54fd2d9b-957e-438e-952d-af188ccbe6fa] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003902768s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-709464 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-709464 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-709464 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.21s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-674lz" [54fd2d9b-957e-438e-952d-af188ccbe6fa] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004119943s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-233349 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.14s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-233349 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (4.61s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-233349 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-233349 --alsologtostderr -v=1: (1.060309506s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-233349 -n default-k8s-diff-port-233349
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-233349 -n default-k8s-diff-port-233349: exit status 2 (446.764208ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-233349 -n default-k8s-diff-port-233349
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-233349 -n default-k8s-diff-port-233349: exit status 2 (465.40523ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-233349 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-233349 -n default-k8s-diff-port-233349
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-233349 -n default-k8s-diff-port-233349
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.61s)
E0906 19:52:46.063844    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/functional-422075/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:52:49.456181    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/calico-709464/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:52:49.500564    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/old-k8s-version-875398/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:52:50.435299    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/default-k8s-diff-port-233349/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:53:04.262616    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/flannel-709464/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (65.51s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-709464 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
E0906 19:47:46.063619    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/functional-422075/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-709464 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (1m5.510892132s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (65.51s)

                                                
                                    
TestNetworkPlugins/group/false/Start (86.01s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p false-709464 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
E0906 19:48:06.189080    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/no-preload-043162/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:48:18.145685    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/addons-724441/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p false-709464 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (1m26.01176351s)
--- PASS: TestNetworkPlugins/group/false/Start (86.01s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-709464 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.30s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-709464 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-gt5c7" [488ff6c3-dbee-41ff-9859-b68ad50a40fa] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-gt5c7" [488ff6c3-dbee-41ff-9859-b68ad50a40fa] Running
E0906 19:48:57.795993    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/auto-709464/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:48:57.802393    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/auto-709464/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:48:57.813815    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/auto-709464/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:48:57.835324    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/auto-709464/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:48:57.876754    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/auto-709464/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:48:57.958168    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/auto-709464/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:48:58.120348    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/auto-709464/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:48:58.442223    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/auto-709464/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:48:59.084088    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/auto-709464/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.03307305s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.30s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.34s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-709464 exec deployment/netcat -- nslookup kubernetes.default
E0906 19:49:00.365923    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/auto-709464/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.34s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-709464 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-709464 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p false-709464 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.37s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (12.40s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-709464 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-j8lg6" [bccb4133-1a4f-47e5-befe-0d2bc8643c94] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0906 19:49:18.290849    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/auto-709464/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-j8lg6" [bccb4133-1a4f-47e5-befe-0d2bc8643c94] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 12.003947725s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (12.40s)

TestNetworkPlugins/group/kindnet/Start (78.89s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-709464 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-709464 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (1m18.893782649s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (78.89s)

TestNetworkPlugins/group/false/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-709464 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.22s)

TestNetworkPlugins/group/false/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-709464 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.15s)

TestNetworkPlugins/group/false/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-709464 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.16s)

TestNetworkPlugins/group/kubenet/Start (79.41s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kubenet-709464 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
E0906 19:50:12.022765    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/skaffold-923947/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:50:19.735425    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/auto-709464/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:50:20.404121    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/flannel-709464/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:50:20.410434    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/flannel-709464/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:50:20.421764    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/flannel-709464/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:50:20.443122    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/flannel-709464/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:50:20.484426    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/flannel-709464/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:50:20.565802    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/flannel-709464/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:50:20.727309    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/flannel-709464/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:50:21.049099    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/flannel-709464/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:50:21.691134    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/flannel-709464/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:50:22.326865    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/no-preload-043162/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:50:22.972877    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/flannel-709464/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:50:25.534459    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/flannel-709464/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:50:30.655809    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/flannel-709464/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:50:40.897235    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/flannel-709464/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kubenet-709464 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (1m19.40706271s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (79.41s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-8gw5v" [6d7dc303-b081-425f-844f-01ed3f07d012] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00481617s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-709464 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.34s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.27s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-709464 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-5z647" [0f1f96ea-b8b3-44eb-aec5-71b2812433df] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0906 19:50:50.031944    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/no-preload-043162/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-5z647" [0f1f96ea-b8b3-44eb-aec5-71b2812433df] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004318448s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.27s)

TestNetworkPlugins/group/kindnet/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-709464 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

TestNetworkPlugins/group/kindnet/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-709464 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.18s)

TestNetworkPlugins/group/kindnet/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-709464 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kubenet-709464 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.41s)

TestNetworkPlugins/group/kubenet/NetCatPod (11.4s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-709464 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-s2v6q" [f62e0077-04a8-4ab7-9a29-579cc1c4205e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-s2v6q" [f62e0077-04a8-4ab7-9a29-579cc1c4205e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 11.003448627s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (11.40s)

TestNetworkPlugins/group/enable-default-cni/Start (50.53s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-709464 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-709464 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (50.529392124s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (50.53s)

TestNetworkPlugins/group/kubenet/DNS (0.31s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-709464 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.31s)

TestNetworkPlugins/group/kubenet/Localhost (0.31s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-709464 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.31s)

TestNetworkPlugins/group/kubenet/HairPin (0.39s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-709464 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.39s)

TestNetworkPlugins/group/bridge/Start (77.54s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-709464 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
E0906 19:52:08.479906    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/calico-709464/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:52:08.486230    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/calico-709464/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:52:08.497574    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/calico-709464/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:52:08.518933    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/calico-709464/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:52:08.560263    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/calico-709464/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:52:08.642327    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/calico-709464/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:52:08.803683    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/calico-709464/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:52:09.125016    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/calico-709464/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:52:09.453826    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/default-k8s-diff-port-233349/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:52:09.460417    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/default-k8s-diff-port-233349/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:52:09.472473    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/default-k8s-diff-port-233349/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:52:09.494190    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/default-k8s-diff-port-233349/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:52:09.536559    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/default-k8s-diff-port-233349/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:52:09.617935    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/default-k8s-diff-port-233349/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:52:09.766890    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/calico-709464/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:52:09.780233    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/default-k8s-diff-port-233349/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:52:10.101975    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/default-k8s-diff-port-233349/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:52:10.743637    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/default-k8s-diff-port-233349/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:52:11.048664    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/calico-709464/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-709464 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (1m17.537119145s)
--- PASS: TestNetworkPlugins/group/bridge/Start (77.54s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-709464 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.41s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.39s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-709464 replace --force -f testdata/netcat-deployment.yaml
E0906 19:52:12.028312    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/default-k8s-diff-port-233349/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-fjdx4" [2bffb97c-ffe9-423e-ae7b-08c18b5e7f45] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0906 19:52:13.610676    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/calico-709464/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:52:14.590224    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/default-k8s-diff-port-233349/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-fjdx4" [2bffb97c-ffe9-423e-ae7b-08c18b5e7f45] Running
E0906 19:52:18.732230    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/calico-709464/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:52:19.711715    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/default-k8s-diff-port-233349/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.012343865s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.39s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-709464 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.27s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-709464 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.24s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-709464 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.24s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-709464 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

TestNetworkPlugins/group/bridge/NetCatPod (9.26s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-709464 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-gzwmj" [76385c17-e4f9-4ae4-a31c-97ee84ec147a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-gzwmj" [76385c17-e4f9-4ae4-a31c-97ee84ec147a] Running
E0906 19:53:18.145742    7525 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/addons-724441/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003874178s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.26s)

TestNetworkPlugins/group/bridge/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-709464 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

TestNetworkPlugins/group/bridge/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-709464 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

TestNetworkPlugins/group/bridge/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-709464 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)

Test skip (24/343)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

TestDownloadOnly/v1.31.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0/kubectl (0.00s)

TestDownloadOnlyKic (0.51s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-333436 --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-333436" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-333436
--- SKIP: TestDownloadOnlyKic (0.51s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestStartStop/group/disable-driver-mounts (0.15s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-926448" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-926448
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

TestNetworkPlugins/group/cilium (3.73s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-709464 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-709464

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-709464

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-709464

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-709464

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-709464

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-709464

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-709464

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-709464

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-709464

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-709464

>>> host: /etc/nsswitch.conf:
* Profile "cilium-709464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-709464"

>>> host: /etc/hosts:
* Profile "cilium-709464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-709464"

>>> host: /etc/resolv.conf:
* Profile "cilium-709464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-709464"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-709464

>>> host: crictl pods:
* Profile "cilium-709464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-709464"

>>> host: crictl containers:
* Profile "cilium-709464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-709464"

>>> k8s: describe netcat deployment:
error: context "cilium-709464" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-709464" does not exist

>>> k8s: netcat logs:
error: context "cilium-709464" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-709464" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-709464" does not exist

>>> k8s: coredns logs:
error: context "cilium-709464" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-709464" does not exist

>>> k8s: api server logs:
error: context "cilium-709464" does not exist

>>> host: /etc/cni:
* Profile "cilium-709464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-709464"

>>> host: ip a s:
* Profile "cilium-709464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-709464"

>>> host: ip r s:
* Profile "cilium-709464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-709464"

>>> host: iptables-save:
* Profile "cilium-709464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-709464"

>>> host: iptables table nat:
* Profile "cilium-709464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-709464"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-709464

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-709464

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-709464" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-709464" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-709464

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-709464

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-709464" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-709464" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-709464" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-709464" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-709464" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-709464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-709464"

>>> host: kubelet daemon config:
* Profile "cilium-709464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-709464"

>>> k8s: kubelet logs:
* Profile "cilium-709464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-709464"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-709464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-709464"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-709464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-709464"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19576-2220/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 06 Sep 2024 19:28:52 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: NoKubernetes-842764
contexts:
- context:
    cluster: NoKubernetes-842764
    extensions:
    - extension:
        last-update: Fri, 06 Sep 2024 19:28:52 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: NoKubernetes-842764
  name: NoKubernetes-842764
current-context: NoKubernetes-842764
kind: Config
preferences: {}
users:
- name: NoKubernetes-842764
  user:
    client-certificate: /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/NoKubernetes-842764/client.crt
    client-key: /home/jenkins/minikube-integration/19576-2220/.minikube/profiles/NoKubernetes-842764/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-709464

>>> host: docker daemon status:
* Profile "cilium-709464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-709464"

>>> host: docker daemon config:
* Profile "cilium-709464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-709464"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-709464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-709464"

>>> host: docker system info:
* Profile "cilium-709464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-709464"

>>> host: cri-docker daemon status:
* Profile "cilium-709464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-709464"

>>> host: cri-docker daemon config:
* Profile "cilium-709464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-709464"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-709464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-709464"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-709464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-709464"

>>> host: cri-dockerd version:
* Profile "cilium-709464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-709464"

>>> host: containerd daemon status:
* Profile "cilium-709464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-709464"

>>> host: containerd daemon config:
* Profile "cilium-709464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-709464"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-709464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-709464"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-709464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-709464"

>>> host: containerd config dump:
* Profile "cilium-709464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-709464"

>>> host: crio daemon status:
* Profile "cilium-709464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-709464"

>>> host: crio daemon config:
* Profile "cilium-709464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-709464"

>>> host: /etc/crio:
* Profile "cilium-709464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-709464"

>>> host: crio config:
* Profile "cilium-709464" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-709464"

----------------------- debugLogs end: cilium-709464 [took: 3.584160759s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-709464" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-709464
--- SKIP: TestNetworkPlugins/group/cilium (3.73s)