Test Report: Docker_Linux_docker_arm64 19643

17d31f5d116bbb5d9ac8f4a1c2873ea47cdfa40f:2024-09-14:36211
Test failures (1/343)

| Order | Failed test                  | Duration |
|-------|------------------------------|----------|
| 33    | TestAddons/parallel/Registry | 74.68s   |

TestAddons/parallel/Registry (74.68s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 2.944602ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-hpzpg" [1dfaea65-f8b7-4b16-a20d-1537cb255324] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.006396391s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-gc6wz" [fc4eccaf-180b-499a-bd4f-df2cba03caa3] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004106482s
addons_test.go:342: (dbg) Run:  kubectl --context addons-522792 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-522792 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-522792 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.10756662s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-522792 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
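The probe above targets the registry Service by its in-cluster DNS name, which follows the standard Kubernetes `<service>.<namespace>.svc.<cluster-domain>` pattern with the default domain `cluster.local`. As an illustrative aside (the `svc_fqdn` helper below is made up for this sketch, not part of the test suite), the name can be composed like so:

```shell
# Illustrative only: compose the in-cluster DNS name the failing wget probes.
# Kubernetes Services resolve as <service>.<namespace>.svc.<cluster-domain>;
# "cluster.local" is the default cluster domain.
svc_fqdn() {
  local name=$1 ns=$2
  printf '%s.%s.svc.cluster.local\n' "$name" "$ns"
}

svc_fqdn registry kube-system   # registry.kube-system.svc.cluster.local
```

A timeout here therefore points at either cluster DNS (CoreDNS) not resolving the name from the test pod, or the Service's endpoints not answering on port 80, rather than at the hostname itself being malformed.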
addons_test.go:361: (dbg) Run:  out/minikube-linux-arm64 -p addons-522792 ip
2024/09/14 16:57:40 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-arm64 -p addons-522792 addons disable registry --alsologtostderr -v=1
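The `waiting 6m0s` / `waiting 10m0s` lines above come from the harness polling pod state until a deadline. A minimal sketch of that poll-until-deadline pattern (the `wait_for` name is invented here; the real waits are implemented in the Go test helpers) looks like:

```shell
# Illustrative sketch of the poll-until-deadline pattern the test harness uses:
# retry a probe command until it succeeds or the timeout (in seconds) expires.
wait_for() {
  local timeout=$1; shift
  local deadline=$(( $(date +%s) + timeout ))
  until "$@"; do
    [ "$(date +%s)" -ge "$deadline" ] && return 1
    sleep 1
  done
}

wait_for 5 true  && echo "healthy"
wait_for 2 false || echo "timed out waiting for the condition"
```

The same shape explains the failure text: the pod waits succeeded quickly (5–6s), while the `wget` probe never succeeded before its one-minute deadline, producing the `timed out waiting for the condition` error on stderr.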
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-522792
helpers_test.go:235: (dbg) docker inspect addons-522792:

-- stdout --
	[
	    {
	        "Id": "191345bc8c4b26d3b9de5a26519ce1129b893a00c562785ee2277511ed88bb8b",
	        "Created": "2024-09-14T16:44:26.991475344Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 8792,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-14T16:44:27.169590995Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:86ef0f8f97fae81f88ea7ff0848cf3d848f7964ac99ca9c948802eb432bfd351",
	        "ResolvConfPath": "/var/lib/docker/containers/191345bc8c4b26d3b9de5a26519ce1129b893a00c562785ee2277511ed88bb8b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/191345bc8c4b26d3b9de5a26519ce1129b893a00c562785ee2277511ed88bb8b/hostname",
	        "HostsPath": "/var/lib/docker/containers/191345bc8c4b26d3b9de5a26519ce1129b893a00c562785ee2277511ed88bb8b/hosts",
	        "LogPath": "/var/lib/docker/containers/191345bc8c4b26d3b9de5a26519ce1129b893a00c562785ee2277511ed88bb8b/191345bc8c4b26d3b9de5a26519ce1129b893a00c562785ee2277511ed88bb8b-json.log",
	        "Name": "/addons-522792",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-522792:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-522792",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/d8a2df0cfbd8e3af662acb3ce37c4be6750f60cfd2abc17826675969e6c5e7cc-init/diff:/var/lib/docker/overlay2/a80bb518dbf0e3b8f3db84fcceb0858caa21e24a2a4f9965235b89a3827f6877/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d8a2df0cfbd8e3af662acb3ce37c4be6750f60cfd2abc17826675969e6c5e7cc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d8a2df0cfbd8e3af662acb3ce37c4be6750f60cfd2abc17826675969e6c5e7cc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d8a2df0cfbd8e3af662acb3ce37c4be6750f60cfd2abc17826675969e6c5e7cc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-522792",
	                "Source": "/var/lib/docker/volumes/addons-522792/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-522792",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-522792",
	                "name.minikube.sigs.k8s.io": "addons-522792",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7b4a9c797f22b8abcfd9587c75a6c0a8c1d109fcc0c43337e7064b613eb28139",
	            "SandboxKey": "/var/run/docker/netns/7b4a9c797f22",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-522792": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "c7dd1fcbf616a2d456195160aa95da4d47960bdcfff93f1961f74612ec7a6241",
	                    "EndpointID": "185be0303108de835c5ae5fafaa2e962ce38599d75da8fbd2f54209bd665c781",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-522792",
	                        "191345bc8c4b"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-522792 -n addons-522792
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-522792 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-522792 logs -n 25: (1.247204401s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-472444   | jenkins | v1.34.0 | 14 Sep 24 16:43 UTC |                     |
	|         | -p download-only-472444                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                                                |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube               | jenkins | v1.34.0 | 14 Sep 24 16:43 UTC | 14 Sep 24 16:43 UTC |
	| delete  | -p download-only-472444                                                                     | download-only-472444   | jenkins | v1.34.0 | 14 Sep 24 16:43 UTC | 14 Sep 24 16:43 UTC |
	| start   | -o=json --download-only                                                                     | download-only-650951   | jenkins | v1.34.0 | 14 Sep 24 16:43 UTC |                     |
	|         | -p download-only-650951                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                                                                |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube               | jenkins | v1.34.0 | 14 Sep 24 16:43 UTC | 14 Sep 24 16:43 UTC |
	| delete  | -p download-only-650951                                                                     | download-only-650951   | jenkins | v1.34.0 | 14 Sep 24 16:43 UTC | 14 Sep 24 16:43 UTC |
	| delete  | -p download-only-472444                                                                     | download-only-472444   | jenkins | v1.34.0 | 14 Sep 24 16:43 UTC | 14 Sep 24 16:43 UTC |
	| delete  | -p download-only-650951                                                                     | download-only-650951   | jenkins | v1.34.0 | 14 Sep 24 16:44 UTC | 14 Sep 24 16:44 UTC |
	| start   | --download-only -p                                                                          | download-docker-909320 | jenkins | v1.34.0 | 14 Sep 24 16:44 UTC |                     |
	|         | download-docker-909320                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | -p download-docker-909320                                                                   | download-docker-909320 | jenkins | v1.34.0 | 14 Sep 24 16:44 UTC | 14 Sep 24 16:44 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-554567   | jenkins | v1.34.0 | 14 Sep 24 16:44 UTC |                     |
	|         | binary-mirror-554567                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:46851                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-554567                                                                     | binary-mirror-554567   | jenkins | v1.34.0 | 14 Sep 24 16:44 UTC | 14 Sep 24 16:44 UTC |
	| addons  | enable dashboard -p                                                                         | addons-522792          | jenkins | v1.34.0 | 14 Sep 24 16:44 UTC |                     |
	|         | addons-522792                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-522792          | jenkins | v1.34.0 | 14 Sep 24 16:44 UTC |                     |
	|         | addons-522792                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-522792 --wait=true                                                                | addons-522792          | jenkins | v1.34.0 | 14 Sep 24 16:44 UTC | 14 Sep 24 16:47 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | addons-522792 addons disable                                                                | addons-522792          | jenkins | v1.34.0 | 14 Sep 24 16:48 UTC | 14 Sep 24 16:48 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-522792 addons disable                                                                | addons-522792          | jenkins | v1.34.0 | 14 Sep 24 16:56 UTC | 14 Sep 24 16:56 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | addons-522792 addons                                                                        | addons-522792          | jenkins | v1.34.0 | 14 Sep 24 16:57 UTC | 14 Sep 24 16:57 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-522792 addons                                                                        | addons-522792          | jenkins | v1.34.0 | 14 Sep 24 16:57 UTC | 14 Sep 24 16:57 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-522792          | jenkins | v1.34.0 | 14 Sep 24 16:57 UTC | 14 Sep 24 16:57 UTC |
	|         | -p addons-522792                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-522792 ssh cat                                                                       | addons-522792          | jenkins | v1.34.0 | 14 Sep 24 16:57 UTC | 14 Sep 24 16:57 UTC |
	|         | /opt/local-path-provisioner/pvc-23df3344-fb76-4fb6-94e8-b80679b102a4_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-522792 addons disable                                                                | addons-522792          | jenkins | v1.34.0 | 14 Sep 24 16:57 UTC |                     |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-522792 ip                                                                            | addons-522792          | jenkins | v1.34.0 | 14 Sep 24 16:57 UTC | 14 Sep 24 16:57 UTC |
	| addons  | addons-522792 addons disable                                                                | addons-522792          | jenkins | v1.34.0 | 14 Sep 24 16:57 UTC | 14 Sep 24 16:57 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/14 16:44:01
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 16:44:01.801937    8295 out.go:345] Setting OutFile to fd 1 ...
	I0914 16:44:01.802292    8295 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 16:44:01.802304    8295 out.go:358] Setting ErrFile to fd 2...
	I0914 16:44:01.802311    8295 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 16:44:01.802592    8295 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-2222/.minikube/bin
	I0914 16:44:01.803057    8295 out.go:352] Setting JSON to false
	I0914 16:44:01.803802    8295 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1590,"bootTime":1726330652,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0914 16:44:01.803872    8295 start.go:139] virtualization:  
	I0914 16:44:01.806283    8295 out.go:177] * [addons-522792] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0914 16:44:01.808571    8295 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 16:44:01.808638    8295 notify.go:220] Checking for updates...
	I0914 16:44:01.812758    8295 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 16:44:01.814775    8295 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19643-2222/kubeconfig
	I0914 16:44:01.816714    8295 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-2222/.minikube
	I0914 16:44:01.818748    8295 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0914 16:44:01.821053    8295 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 16:44:01.823232    8295 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 16:44:01.848775    8295 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0914 16:44:01.848896    8295 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 16:44:01.905479    8295 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-14 16:44:01.895722833 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0914 16:44:01.905592    8295 docker.go:318] overlay module found
	I0914 16:44:01.908471    8295 out.go:177] * Using the docker driver based on user configuration
	I0914 16:44:01.910344    8295 start.go:297] selected driver: docker
	I0914 16:44:01.910362    8295 start.go:901] validating driver "docker" against <nil>
	I0914 16:44:01.910375    8295 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 16:44:01.910973    8295 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 16:44:01.967760    8295 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-14 16:44:01.958710196 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0914 16:44:01.967975    8295 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 16:44:01.968201    8295 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 16:44:01.970227    8295 out.go:177] * Using Docker driver with root privileges
	I0914 16:44:01.972153    8295 cni.go:84] Creating CNI manager for ""
	I0914 16:44:01.972226    8295 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 16:44:01.972252    8295 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0914 16:44:01.972342    8295 start.go:340] cluster config:
	{Name:addons-522792 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-522792 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 16:44:01.975692    8295 out.go:177] * Starting "addons-522792" primary control-plane node in "addons-522792" cluster
	I0914 16:44:01.977380    8295 cache.go:121] Beginning downloading kic base image for docker with docker
	I0914 16:44:01.979201    8295 out.go:177] * Pulling base image v0.0.45-1726281268-19643 ...
	I0914 16:44:01.980872    8295 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 16:44:01.980924    8295 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19643-2222/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0914 16:44:01.980937    8295 cache.go:56] Caching tarball of preloaded images
	I0914 16:44:01.980963    8295 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e in local docker daemon
	I0914 16:44:01.981031    8295 preload.go:172] Found /home/jenkins/minikube-integration/19643-2222/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 16:44:01.981042    8295 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0914 16:44:01.981414    8295 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/addons-522792/config.json ...
	I0914 16:44:01.981446    8295 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/addons-522792/config.json: {Name:mk05b4422888d53cc0e46967dc9452d87e8da5c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 16:44:01.995373    8295 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e to local cache
	I0914 16:44:01.995483    8295 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e in local cache directory
	I0914 16:44:01.995508    8295 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e in local cache directory, skipping pull
	I0914 16:44:01.995516    8295 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e exists in cache, skipping pull
	I0914 16:44:01.995525    8295 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e as a tarball
	I0914 16:44:01.995531    8295 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e from local cache
	I0914 16:44:19.324009    8295 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e from cached tarball
	I0914 16:44:19.324040    8295 cache.go:194] Successfully downloaded all kic artifacts
	I0914 16:44:19.324070    8295 start.go:360] acquireMachinesLock for addons-522792: {Name:mkdc1017d7389fd2d2f340695a674163ceff34cf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 16:44:19.324193    8295 start.go:364] duration metric: took 104.075µs to acquireMachinesLock for "addons-522792"
	I0914 16:44:19.324223    8295 start.go:93] Provisioning new machine with config: &{Name:addons-522792 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-522792 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 16:44:19.324316    8295 start.go:125] createHost starting for "" (driver="docker")
	I0914 16:44:19.326528    8295 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0914 16:44:19.326765    8295 start.go:159] libmachine.API.Create for "addons-522792" (driver="docker")
	I0914 16:44:19.326807    8295 client.go:168] LocalClient.Create starting
	I0914 16:44:19.326925    8295 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19643-2222/.minikube/certs/ca.pem
	I0914 16:44:20.314938    8295 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19643-2222/.minikube/certs/cert.pem
	I0914 16:44:20.844550    8295 cli_runner.go:164] Run: docker network inspect addons-522792 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0914 16:44:20.860035    8295 cli_runner.go:211] docker network inspect addons-522792 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0914 16:44:20.860131    8295 network_create.go:284] running [docker network inspect addons-522792] to gather additional debugging logs...
	I0914 16:44:20.860152    8295 cli_runner.go:164] Run: docker network inspect addons-522792
	W0914 16:44:20.874241    8295 cli_runner.go:211] docker network inspect addons-522792 returned with exit code 1
	I0914 16:44:20.874276    8295 network_create.go:287] error running [docker network inspect addons-522792]: docker network inspect addons-522792: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-522792 not found
	I0914 16:44:20.874291    8295 network_create.go:289] output of [docker network inspect addons-522792]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-522792 not found
	
	** /stderr **
	I0914 16:44:20.874394    8295 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0914 16:44:20.889417    8295 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400000e710}
	I0914 16:44:20.889464    8295 network_create.go:124] attempt to create docker network addons-522792 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0914 16:44:20.889521    8295 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-522792 addons-522792
	I0914 16:44:20.958902    8295 network_create.go:108] docker network addons-522792 192.168.49.0/24 created
	I0914 16:44:20.958937    8295 kic.go:121] calculated static IP "192.168.49.2" for the "addons-522792" container
	I0914 16:44:20.959015    8295 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0914 16:44:20.973187    8295 cli_runner.go:164] Run: docker volume create addons-522792 --label name.minikube.sigs.k8s.io=addons-522792 --label created_by.minikube.sigs.k8s.io=true
	I0914 16:44:20.991580    8295 oci.go:103] Successfully created a docker volume addons-522792
	I0914 16:44:20.991682    8295 cli_runner.go:164] Run: docker run --rm --name addons-522792-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-522792 --entrypoint /usr/bin/test -v addons-522792:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e -d /var/lib
	I0914 16:44:23.190121    8295 cli_runner.go:217] Completed: docker run --rm --name addons-522792-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-522792 --entrypoint /usr/bin/test -v addons-522792:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e -d /var/lib: (2.198391075s)
	I0914 16:44:23.190149    8295 oci.go:107] Successfully prepared a docker volume addons-522792
	I0914 16:44:23.190179    8295 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 16:44:23.190200    8295 kic.go:194] Starting extracting preloaded images to volume ...
	I0914 16:44:23.190262    8295 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19643-2222/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-522792:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e -I lz4 -xf /preloaded.tar -C /extractDir
	I0914 16:44:26.922537    8295 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19643-2222/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-522792:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e -I lz4 -xf /preloaded.tar -C /extractDir: (3.732233506s)
	I0914 16:44:26.922571    8295 kic.go:203] duration metric: took 3.732369227s to extract preloaded images to volume ...
	W0914 16:44:26.922711    8295 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0914 16:44:26.922834    8295 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0914 16:44:26.976160    8295 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-522792 --name addons-522792 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-522792 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-522792 --network addons-522792 --ip 192.168.49.2 --volume addons-522792:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e
	I0914 16:44:27.334902    8295 cli_runner.go:164] Run: docker container inspect addons-522792 --format={{.State.Running}}
	I0914 16:44:27.351901    8295 cli_runner.go:164] Run: docker container inspect addons-522792 --format={{.State.Status}}
	I0914 16:44:27.377771    8295 cli_runner.go:164] Run: docker exec addons-522792 stat /var/lib/dpkg/alternatives/iptables
	I0914 16:44:27.454955    8295 oci.go:144] the created container "addons-522792" has a running status.
	I0914 16:44:27.454993    8295 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19643-2222/.minikube/machines/addons-522792/id_rsa...
	I0914 16:44:29.050428    8295 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19643-2222/.minikube/machines/addons-522792/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0914 16:44:29.070002    8295 cli_runner.go:164] Run: docker container inspect addons-522792 --format={{.State.Status}}
	I0914 16:44:29.087206    8295 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0914 16:44:29.087229    8295 kic_runner.go:114] Args: [docker exec --privileged addons-522792 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0914 16:44:29.140360    8295 cli_runner.go:164] Run: docker container inspect addons-522792 --format={{.State.Status}}
	I0914 16:44:29.157530    8295 machine.go:93] provisionDockerMachine start ...
	I0914 16:44:29.157617    8295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522792
	I0914 16:44:29.176251    8295 main.go:141] libmachine: Using SSH client type: native
	I0914 16:44:29.176527    8295 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0914 16:44:29.176543    8295 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 16:44:29.314409    8295 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-522792
	
	I0914 16:44:29.314431    8295 ubuntu.go:169] provisioning hostname "addons-522792"
	I0914 16:44:29.314497    8295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522792
	I0914 16:44:29.334997    8295 main.go:141] libmachine: Using SSH client type: native
	I0914 16:44:29.335268    8295 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0914 16:44:29.335286    8295 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-522792 && echo "addons-522792" | sudo tee /etc/hostname
	I0914 16:44:29.487314    8295 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-522792
	
	I0914 16:44:29.487456    8295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522792
	I0914 16:44:29.505792    8295 main.go:141] libmachine: Using SSH client type: native
	I0914 16:44:29.506039    8295 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0914 16:44:29.506056    8295 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-522792' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-522792/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-522792' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 16:44:29.647271    8295 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 16:44:29.647294    8295 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19643-2222/.minikube CaCertPath:/home/jenkins/minikube-integration/19643-2222/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19643-2222/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19643-2222/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19643-2222/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19643-2222/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19643-2222/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19643-2222/.minikube}
	I0914 16:44:29.647317    8295 ubuntu.go:177] setting up certificates
	I0914 16:44:29.647328    8295 provision.go:84] configureAuth start
	I0914 16:44:29.647405    8295 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-522792
	I0914 16:44:29.663911    8295 provision.go:143] copyHostCerts
	I0914 16:44:29.664005    8295 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-2222/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19643-2222/.minikube/ca.pem (1082 bytes)
	I0914 16:44:29.664135    8295 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-2222/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19643-2222/.minikube/cert.pem (1123 bytes)
	I0914 16:44:29.664197    8295 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-2222/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19643-2222/.minikube/key.pem (1675 bytes)
	I0914 16:44:29.664249    8295 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19643-2222/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19643-2222/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19643-2222/.minikube/certs/ca-key.pem org=jenkins.addons-522792 san=[127.0.0.1 192.168.49.2 addons-522792 localhost minikube]
	I0914 16:44:30.154846    8295 provision.go:177] copyRemoteCerts
	I0914 16:44:30.154924    8295 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 16:44:30.154971    8295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522792
	I0914 16:44:30.176363    8295 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19643-2222/.minikube/machines/addons-522792/id_rsa Username:docker}
	I0914 16:44:30.276834    8295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-2222/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0914 16:44:30.302741    8295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-2222/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0914 16:44:30.327442    8295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-2222/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 16:44:30.352697    8295 provision.go:87] duration metric: took 705.345104ms to configureAuth
	I0914 16:44:30.352729    8295 ubuntu.go:193] setting minikube options for container-runtime
	I0914 16:44:30.352931    8295 config.go:182] Loaded profile config "addons-522792": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 16:44:30.352991    8295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522792
	I0914 16:44:30.370877    8295 main.go:141] libmachine: Using SSH client type: native
	I0914 16:44:30.371139    8295 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0914 16:44:30.371178    8295 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0914 16:44:30.511575    8295 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0914 16:44:30.511599    8295 ubuntu.go:71] root file system type: overlay
	I0914 16:44:30.511723    8295 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0914 16:44:30.511786    8295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522792
	I0914 16:44:30.529335    8295 main.go:141] libmachine: Using SSH client type: native
	I0914 16:44:30.529584    8295 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0914 16:44:30.529669    8295 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0914 16:44:30.680057    8295 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0914 16:44:30.680150    8295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522792
	I0914 16:44:30.697213    8295 main.go:141] libmachine: Using SSH client type: native
	I0914 16:44:30.697459    8295 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0914 16:44:30.697482    8295 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0914 16:44:31.490415    8295 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-09-06 12:06:36.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-09-14 16:44:30.672301452 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0914 16:44:31.490499    8295 machine.go:96] duration metric: took 2.332949243s to provisionDockerMachine
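The unit-file update logged above uses a standard idempotent-install pattern: `diff -u old new` exits non-zero when the files differ, so the `|| { mv …; systemctl daemon-reload; systemctl restart docker; }` branch runs only when the generated unit actually changed. A minimal sketch of that pattern under illustrative paths (`echo unit-updated` stands in for the real `systemctl` calls):

```shell
old=/tmp/docker.service.demo
new=/tmp/docker.service.demo.new
echo 'ExecStart=/usr/bin/dockerd -H fd://' > "$old"
echo 'ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376' > "$new"

# diff exits 0 when the files are identical and 1 when they differ,
# so only a real change triggers the replace-and-restart branch.
diff -u "$old" "$new" >/dev/null || {
  mv "$new" "$old"
  echo unit-updated   # stand-in for systemctl daemon-reload / restart
}
cat "$old"
```

Re-running the same block is a no-op once the files match, which is why the provisioner can apply it on every start.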
	I0914 16:44:31.490527    8295 client.go:171] duration metric: took 12.16370876s to LocalClient.Create
	I0914 16:44:31.490580    8295 start.go:167] duration metric: took 12.163816206s to libmachine.API.Create "addons-522792"
	I0914 16:44:31.490608    8295 start.go:293] postStartSetup for "addons-522792" (driver="docker")
	I0914 16:44:31.490645    8295 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 16:44:31.490736    8295 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 16:44:31.490810    8295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522792
	I0914 16:44:31.509656    8295 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19643-2222/.minikube/machines/addons-522792/id_rsa Username:docker}
	I0914 16:44:31.612619    8295 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 16:44:31.616042    8295 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0914 16:44:31.616083    8295 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0914 16:44:31.616095    8295 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0914 16:44:31.616110    8295 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0914 16:44:31.616125    8295 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-2222/.minikube/addons for local assets ...
	I0914 16:44:31.616204    8295 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-2222/.minikube/files for local assets ...
	I0914 16:44:31.616230    8295 start.go:296] duration metric: took 125.599886ms for postStartSetup
	I0914 16:44:31.616547    8295 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-522792
	I0914 16:44:31.634311    8295 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/addons-522792/config.json ...
	I0914 16:44:31.634613    8295 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 16:44:31.634664    8295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522792
	I0914 16:44:31.652608    8295 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19643-2222/.minikube/machines/addons-522792/id_rsa Username:docker}
	I0914 16:44:31.748482    8295 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0914 16:44:31.753729    8295 start.go:128] duration metric: took 12.429396119s to createHost
	I0914 16:44:31.753752    8295 start.go:83] releasing machines lock for "addons-522792", held for 12.429547808s
	I0914 16:44:31.753870    8295 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-522792
	I0914 16:44:31.771284    8295 ssh_runner.go:195] Run: cat /version.json
	I0914 16:44:31.771337    8295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522792
	I0914 16:44:31.771663    8295 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 16:44:31.771731    8295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522792
	I0914 16:44:31.790818    8295 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19643-2222/.minikube/machines/addons-522792/id_rsa Username:docker}
	I0914 16:44:31.792163    8295 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19643-2222/.minikube/machines/addons-522792/id_rsa Username:docker}
	I0914 16:44:31.886773    8295 ssh_runner.go:195] Run: systemctl --version
	I0914 16:44:32.022008    8295 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0914 16:44:32.026659    8295 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0914 16:44:32.054151    8295 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0914 16:44:32.054238    8295 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 16:44:32.084793    8295 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
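The two `find` commands above patch the loopback CNI config in place and then sideline any bridge/podman configs by renaming them with a `.mk_disabled` suffix, so the container runtime stops loading them. A sketch of the disable step against a throwaway directory (directory and file names are illustrative):

```shell
d=/tmp/cni-demo
mkdir -p "$d"
touch "$d/87-podman-bridge.conflist" "$d/200-loopback.conf"

# Rename bridge/podman configs (unless already disabled); the
# loopback config is deliberately left untouched.
find "$d" -maxdepth 1 -type f \( \( -name '*bridge*' -o -name '*podman*' \) \
    -a -not -name '*.mk_disabled' \) \
  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
ls "$d"
```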
	I0914 16:44:32.084822    8295 start.go:495] detecting cgroup driver to use...
	I0914 16:44:32.084857    8295 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0914 16:44:32.084960    8295 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 16:44:32.103003    8295 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0914 16:44:32.113650    8295 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0914 16:44:32.123684    8295 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0914 16:44:32.123757    8295 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0914 16:44:32.133978    8295 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0914 16:44:32.144288    8295 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0914 16:44:32.154377    8295 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0914 16:44:32.164366    8295 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 16:44:32.174735    8295 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0914 16:44:32.185523    8295 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0914 16:44:32.195889    8295 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0914 16:44:32.206316    8295 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 16:44:32.215194    8295 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 16:44:32.224075    8295 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 16:44:32.307743    8295 ssh_runner.go:195] Run: sudo systemctl restart containerd
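The run of `sed -i` commands between 16:44:32.103 and 16:44:32.195 rewrites `/etc/containerd/config.toml` so containerd matches the detected "cgroupfs" driver. The key substitution, replayed against a sample file (the content below is illustrative, not the real config):

```shell
cat > /tmp/config.toml.demo <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF

# Same substitution as the logged command: force SystemdCgroup = false
# while the captured group preserves the line's indentation.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /tmp/config.toml.demo
cat /tmp/config.toml.demo
```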
	I0914 16:44:32.419092    8295 start.go:495] detecting cgroup driver to use...
	I0914 16:44:32.419237    8295 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0914 16:44:32.419308    8295 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0914 16:44:32.438206    8295 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0914 16:44:32.438325    8295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0914 16:44:32.450870    8295 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 16:44:32.467582    8295 ssh_runner.go:195] Run: which cri-dockerd
	I0914 16:44:32.472203    8295 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0914 16:44:32.485975    8295 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0914 16:44:32.506016    8295 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0914 16:44:32.602071    8295 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0914 16:44:32.690595    8295 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0914 16:44:32.690801    8295 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0914 16:44:32.713101    8295 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 16:44:32.801151    8295 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0914 16:44:33.116445    8295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0914 16:44:33.129498    8295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0914 16:44:33.142721    8295 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0914 16:44:33.239988    8295 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0914 16:44:33.338019    8295 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 16:44:33.438909    8295 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0914 16:44:33.454493    8295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0914 16:44:33.466710    8295 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 16:44:33.560749    8295 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0914 16:44:33.639622    8295 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0914 16:44:33.639733    8295 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0914 16:44:33.644116    8295 start.go:563] Will wait 60s for crictl version
	I0914 16:44:33.644184    8295 ssh_runner.go:195] Run: which crictl
	I0914 16:44:33.650203    8295 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 16:44:33.690968    8295 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0914 16:44:33.691038    8295 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0914 16:44:33.714885    8295 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0914 16:44:33.740701    8295 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0914 16:44:33.740820    8295 cli_runner.go:164] Run: docker network inspect addons-522792 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0914 16:44:33.755288    8295 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0914 16:44:33.758885    8295 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
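The `/etc/hosts` edit above is likewise idempotent: it filters out any existing `host.minikube.internal` line before appending the fresh mapping, so repeated runs never duplicate the entry. The same pattern against a scratch file (the path is a stand-in for `/etc/hosts`; requires bash for the `$'\t'` quoting):

```shell
hosts=/tmp/hosts.demo
printf '127.0.0.1\tlocalhost\n10.0.0.9\thost.minikube.internal\n' > "$hosts"

# Drop the stale mapping, append the current one, then swap the file in.
{ grep -v $'\thost.minikube.internal$' "$hosts"; \
  printf '192.168.49.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
grep 'host.minikube.internal' "$hosts"
```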
	I0914 16:44:33.769681    8295 kubeadm.go:883] updating cluster {Name:addons-522792 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-522792 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 16:44:33.769811    8295 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 16:44:33.769874    8295 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0914 16:44:33.788916    8295 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0914 16:44:33.788940    8295 docker.go:615] Images already preloaded, skipping extraction
	I0914 16:44:33.789013    8295 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0914 16:44:33.808310    8295 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0914 16:44:33.808335    8295 cache_images.go:84] Images are preloaded, skipping loading
	I0914 16:44:33.808345    8295 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 docker true true} ...
	I0914 16:44:33.808437    8295 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-522792 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-522792 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 16:44:33.808507    8295 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0914 16:44:33.854827    8295 cni.go:84] Creating CNI manager for ""
	I0914 16:44:33.854853    8295 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 16:44:33.854863    8295 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 16:44:33.854883    8295 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-522792 NodeName:addons-522792 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 16:44:33.855016    8295 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-522792"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 16:44:33.855084    8295 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 16:44:33.864224    8295 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 16:44:33.864293    8295 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 16:44:33.873133    8295 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0914 16:44:33.891678    8295 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 16:44:33.910810    8295 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0914 16:44:33.929703    8295 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0914 16:44:33.933365    8295 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 16:44:33.944990    8295 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 16:44:34.036059    8295 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 16:44:34.052436    8295 certs.go:68] Setting up /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/addons-522792 for IP: 192.168.49.2
	I0914 16:44:34.052453    8295 certs.go:194] generating shared ca certs ...
	I0914 16:44:34.052469    8295 certs.go:226] acquiring lock for ca certs: {Name:mk2548b3a58ffd4e44ae620cb6b0d26b309d9049 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 16:44:34.052602    8295 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19643-2222/.minikube/ca.key
	I0914 16:44:34.340349    8295 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19643-2222/.minikube/ca.crt ...
	I0914 16:44:34.340381    8295 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-2222/.minikube/ca.crt: {Name:mk1b4b4d4e92d98f432ded83ee2424f056b1f93a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 16:44:34.340617    8295 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19643-2222/.minikube/ca.key ...
	I0914 16:44:34.340634    8295 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-2222/.minikube/ca.key: {Name:mk42a89ced4f571195d59880d4beb3e782ffbaf5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 16:44:34.340773    8295 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19643-2222/.minikube/proxy-client-ca.key
	I0914 16:44:35.422528    8295 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19643-2222/.minikube/proxy-client-ca.crt ...
	I0914 16:44:35.422558    8295 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-2222/.minikube/proxy-client-ca.crt: {Name:mk991b89f08a1044bb2004fa2ea0bae51f476789 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 16:44:35.422742    8295 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19643-2222/.minikube/proxy-client-ca.key ...
	I0914 16:44:35.422755    8295 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-2222/.minikube/proxy-client-ca.key: {Name:mkb18f8234d4a7e287a68afde36209797a4c4143 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 16:44:35.422836    8295 certs.go:256] generating profile certs ...
	I0914 16:44:35.422899    8295 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/addons-522792/client.key
	I0914 16:44:35.422916    8295 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/addons-522792/client.crt with IP's: []
	I0914 16:44:35.700084    8295 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/addons-522792/client.crt ...
	I0914 16:44:35.700117    8295 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/addons-522792/client.crt: {Name:mkf841a6a3ec0aa2a3f90d0fa02f92327a943f3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 16:44:35.700301    8295 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/addons-522792/client.key ...
	I0914 16:44:35.700313    8295 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/addons-522792/client.key: {Name:mkac6a41bfef7500b10d65a8f654fbe049b94fa8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 16:44:35.700390    8295 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/addons-522792/apiserver.key.ecfe55ee
	I0914 16:44:35.700415    8295 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/addons-522792/apiserver.crt.ecfe55ee with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0914 16:44:35.897586    8295 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/addons-522792/apiserver.crt.ecfe55ee ...
	I0914 16:44:35.897618    8295 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/addons-522792/apiserver.crt.ecfe55ee: {Name:mk701212413b6a0757c042b8216ccd0285158063 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 16:44:35.897841    8295 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/addons-522792/apiserver.key.ecfe55ee ...
	I0914 16:44:35.897857    8295 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/addons-522792/apiserver.key.ecfe55ee: {Name:mk925a53db1fc30f220e1fc33044a375c9a46a27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 16:44:35.897950    8295 certs.go:381] copying /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/addons-522792/apiserver.crt.ecfe55ee -> /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/addons-522792/apiserver.crt
	I0914 16:44:35.898032    8295 certs.go:385] copying /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/addons-522792/apiserver.key.ecfe55ee -> /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/addons-522792/apiserver.key
	I0914 16:44:35.898086    8295 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/addons-522792/proxy-client.key
	I0914 16:44:35.898107    8295 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/addons-522792/proxy-client.crt with IP's: []
	I0914 16:44:36.464743    8295 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/addons-522792/proxy-client.crt ...
	I0914 16:44:36.464776    8295 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/addons-522792/proxy-client.crt: {Name:mk3fad0ce95c9866f522dd9c020359436206a25a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 16:44:36.464964    8295 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/addons-522792/proxy-client.key ...
	I0914 16:44:36.464981    8295 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/addons-522792/proxy-client.key: {Name:mk6c359cb7b7dba37899e69a3bb58044fe7af750 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 16:44:36.465162    8295 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-2222/.minikube/certs/ca-key.pem (1675 bytes)
	I0914 16:44:36.465206    8295 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-2222/.minikube/certs/ca.pem (1082 bytes)
	I0914 16:44:36.465234    8295 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-2222/.minikube/certs/cert.pem (1123 bytes)
	I0914 16:44:36.465264    8295 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-2222/.minikube/certs/key.pem (1675 bytes)
	I0914 16:44:36.465871    8295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-2222/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 16:44:36.491610    8295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-2222/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 16:44:36.516672    8295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-2222/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 16:44:36.541302    8295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-2222/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 16:44:36.566802    8295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/addons-522792/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0914 16:44:36.591121    8295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/addons-522792/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0914 16:44:36.616281    8295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/addons-522792/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 16:44:36.640903    8295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/addons-522792/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0914 16:44:36.665277    8295 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-2222/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 16:44:36.690496    8295 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 16:44:36.709337    8295 ssh_runner.go:195] Run: openssl version
	I0914 16:44:36.714764    8295 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 16:44:36.724462    8295 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 16:44:36.728130    8295 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 14 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0914 16:44:36.728199    8295 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 16:44:36.735185    8295 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 16:44:36.745085    8295 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 16:44:36.748558    8295 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0914 16:44:36.748655    8295 kubeadm.go:392] StartCluster: {Name:addons-522792 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-522792 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 16:44:36.748814    8295 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0914 16:44:36.766636    8295 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 16:44:36.777160    8295 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 16:44:36.786305    8295 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0914 16:44:36.786369    8295 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 16:44:36.795449    8295 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 16:44:36.795470    8295 kubeadm.go:157] found existing configuration files:
	
	I0914 16:44:36.795521    8295 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 16:44:36.804914    8295 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 16:44:36.804988    8295 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 16:44:36.813929    8295 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 16:44:36.823514    8295 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 16:44:36.823582    8295 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 16:44:36.832094    8295 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 16:44:36.841145    8295 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 16:44:36.841233    8295 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 16:44:36.850443    8295 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 16:44:36.859526    8295 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 16:44:36.859610    8295 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 16:44:36.868072    8295 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0914 16:44:36.910704    8295 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0914 16:44:36.910765    8295 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 16:44:36.932149    8295 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0914 16:44:36.932226    8295 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1069-aws
	I0914 16:44:36.932270    8295 kubeadm.go:310] OS: Linux
	I0914 16:44:36.932319    8295 kubeadm.go:310] CGROUPS_CPU: enabled
	I0914 16:44:36.932374    8295 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0914 16:44:36.932444    8295 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0914 16:44:36.932533    8295 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0914 16:44:36.932617    8295 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0914 16:44:36.932727    8295 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0914 16:44:36.932790    8295 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0914 16:44:36.932859    8295 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0914 16:44:36.932921    8295 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0914 16:44:36.995354    8295 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 16:44:36.995525    8295 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 16:44:36.995653    8295 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0914 16:44:37.014329    8295 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 16:44:37.016815    8295 out.go:235]   - Generating certificates and keys ...
	I0914 16:44:37.017124    8295 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 16:44:37.017210    8295 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 16:44:37.752163    8295 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0914 16:44:37.942800    8295 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0914 16:44:38.126523    8295 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0914 16:44:38.274564    8295 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0914 16:44:38.719416    8295 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0914 16:44:38.719659    8295 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-522792 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0914 16:44:39.239018    8295 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0914 16:44:39.239298    8295 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-522792 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0914 16:44:39.415622    8295 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0914 16:44:40.146613    8295 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0914 16:44:41.129805    8295 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0914 16:44:41.129959    8295 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 16:44:41.479020    8295 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 16:44:42.515699    8295 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0914 16:44:43.554182    8295 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 16:44:44.804777    8295 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 16:44:45.922020    8295 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 16:44:45.922906    8295 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 16:44:45.926043    8295 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 16:44:45.928155    8295 out.go:235]   - Booting up control plane ...
	I0914 16:44:45.928258    8295 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 16:44:45.928336    8295 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 16:44:45.929378    8295 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 16:44:45.941555    8295 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 16:44:45.947566    8295 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 16:44:45.947904    8295 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 16:44:46.052913    8295 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0914 16:44:46.053038    8295 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0914 16:44:47.053618    8295 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001123037s
	I0914 16:44:47.053705    8295 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0914 16:44:53.554727    8295 kubeadm.go:310] [api-check] The API server is healthy after 6.501121054s
	I0914 16:44:53.575750    8295 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 16:44:53.591563    8295 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 16:44:53.622678    8295 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 16:44:53.623117    8295 kubeadm.go:310] [mark-control-plane] Marking the node addons-522792 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0914 16:44:53.640756    8295 kubeadm.go:310] [bootstrap-token] Using token: 8q88ss.j8jorv0mm0xh3akf
	I0914 16:44:53.643618    8295 out.go:235]   - Configuring RBAC rules ...
	I0914 16:44:53.643740    8295 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 16:44:53.652139    8295 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0914 16:44:53.660757    8295 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 16:44:53.666504    8295 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 16:44:53.670231    8295 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 16:44:53.674442    8295 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 16:44:53.961296    8295 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0914 16:44:54.388906    8295 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0914 16:44:54.961586    8295 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0914 16:44:54.962839    8295 kubeadm.go:310] 
	I0914 16:44:54.962915    8295 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0914 16:44:54.962926    8295 kubeadm.go:310] 
	I0914 16:44:54.963002    8295 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0914 16:44:54.963014    8295 kubeadm.go:310] 
	I0914 16:44:54.963041    8295 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0914 16:44:54.963109    8295 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 16:44:54.963261    8295 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 16:44:54.963276    8295 kubeadm.go:310] 
	I0914 16:44:54.963331    8295 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0914 16:44:54.963340    8295 kubeadm.go:310] 
	I0914 16:44:54.963388    8295 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0914 16:44:54.963396    8295 kubeadm.go:310] 
	I0914 16:44:54.963447    8295 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0914 16:44:54.963525    8295 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 16:44:54.963596    8295 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 16:44:54.963605    8295 kubeadm.go:310] 
	I0914 16:44:54.963688    8295 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0914 16:44:54.963770    8295 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0914 16:44:54.963780    8295 kubeadm.go:310] 
	I0914 16:44:54.963863    8295 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 8q88ss.j8jorv0mm0xh3akf \
	I0914 16:44:54.963969    8295 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d0304bc25387089883d68ff7258f12617a030e0d99b54ee11bd72f6521a445aa \
	I0914 16:44:54.963994    8295 kubeadm.go:310] 	--control-plane 
	I0914 16:44:54.964005    8295 kubeadm.go:310] 
	I0914 16:44:54.964088    8295 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0914 16:44:54.964097    8295 kubeadm.go:310] 
	I0914 16:44:54.964177    8295 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 8q88ss.j8jorv0mm0xh3akf \
	I0914 16:44:54.964283    8295 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d0304bc25387089883d68ff7258f12617a030e0d99b54ee11bd72f6521a445aa 
	I0914 16:44:54.968195    8295 kubeadm.go:310] W0914 16:44:36.905702    1815 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 16:44:54.968489    8295 kubeadm.go:310] W0914 16:44:36.907144    1815 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 16:44:54.968703    8295 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1069-aws\n", err: exit status 1
	I0914 16:44:54.968812    8295 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 16:44:54.968830    8295 cni.go:84] Creating CNI manager for ""
	I0914 16:44:54.968845    8295 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 16:44:54.971237    8295 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0914 16:44:54.973194    8295 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0914 16:44:54.981931    8295 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0914 16:44:55.002206    8295 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 16:44:55.002337    8295 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 16:44:55.002412    8295 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-522792 minikube.k8s.io/updated_at=2024_09_14T16_44_55_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=fbeeb744274463b05401c917e5ab21bbaf5ef95a minikube.k8s.io/name=addons-522792 minikube.k8s.io/primary=true
	I0914 16:44:55.178489    8295 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 16:44:55.178559    8295 ops.go:34] apiserver oom_adj: -16
	I0914 16:44:55.678569    8295 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 16:44:56.178930    8295 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 16:44:56.679419    8295 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 16:44:57.179461    8295 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 16:44:57.678698    8295 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 16:44:58.178637    8295 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 16:44:58.678991    8295 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 16:44:58.769807    8295 kubeadm.go:1113] duration metric: took 3.767518042s to wait for elevateKubeSystemPrivileges
	I0914 16:44:58.769843    8295 kubeadm.go:394] duration metric: took 22.021193198s to StartCluster
	I0914 16:44:58.769861    8295 settings.go:142] acquiring lock: {Name:mk7068a6207915f025cb2a19e02893c689ceddfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 16:44:58.769977    8295 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19643-2222/kubeconfig
	I0914 16:44:58.770361    8295 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-2222/kubeconfig: {Name:mk2bcaacb32436e1047ebb87e69cae0c72e33743 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 16:44:58.770551    8295 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0914 16:44:58.770671    8295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0914 16:44:58.770901    8295 config.go:182] Loaded profile config "addons-522792": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 16:44:58.770938    8295 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0914 16:44:58.771025    8295 addons.go:69] Setting yakd=true in profile "addons-522792"
	I0914 16:44:58.771041    8295 addons.go:234] Setting addon yakd=true in "addons-522792"
	I0914 16:44:58.771066    8295 host.go:66] Checking if "addons-522792" exists ...
	I0914 16:44:58.771576    8295 cli_runner.go:164] Run: docker container inspect addons-522792 --format={{.State.Status}}
	I0914 16:44:58.772006    8295 addons.go:69] Setting cloud-spanner=true in profile "addons-522792"
	I0914 16:44:58.772025    8295 addons.go:234] Setting addon cloud-spanner=true in "addons-522792"
	I0914 16:44:58.772048    8295 host.go:66] Checking if "addons-522792" exists ...
	I0914 16:44:58.772054    8295 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-522792"
	I0914 16:44:58.772073    8295 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-522792"
	I0914 16:44:58.772097    8295 host.go:66] Checking if "addons-522792" exists ...
	I0914 16:44:58.772459    8295 cli_runner.go:164] Run: docker container inspect addons-522792 --format={{.State.Status}}
	I0914 16:44:58.772529    8295 cli_runner.go:164] Run: docker container inspect addons-522792 --format={{.State.Status}}
	I0914 16:44:58.775658    8295 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-522792"
	I0914 16:44:58.775773    8295 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-522792"
	I0914 16:44:58.775846    8295 host.go:66] Checking if "addons-522792" exists ...
	I0914 16:44:58.776486    8295 cli_runner.go:164] Run: docker container inspect addons-522792 --format={{.State.Status}}
	I0914 16:44:58.778502    8295 addons.go:69] Setting registry=true in profile "addons-522792"
	I0914 16:44:58.778574    8295 addons.go:234] Setting addon registry=true in "addons-522792"
	I0914 16:44:58.778624    8295 host.go:66] Checking if "addons-522792" exists ...
	I0914 16:44:58.778662    8295 addons.go:69] Setting ingress-dns=true in profile "addons-522792"
	I0914 16:44:58.778686    8295 addons.go:234] Setting addon ingress-dns=true in "addons-522792"
	I0914 16:44:58.778742    8295 host.go:66] Checking if "addons-522792" exists ...
	I0914 16:44:58.779216    8295 cli_runner.go:164] Run: docker container inspect addons-522792 --format={{.State.Status}}
	I0914 16:44:58.779310    8295 cli_runner.go:164] Run: docker container inspect addons-522792 --format={{.State.Status}}
	I0914 16:44:58.781531    8295 addons.go:69] Setting inspektor-gadget=true in profile "addons-522792"
	I0914 16:44:58.781572    8295 addons.go:234] Setting addon inspektor-gadget=true in "addons-522792"
	I0914 16:44:58.781613    8295 host.go:66] Checking if "addons-522792" exists ...
	I0914 16:44:58.782076    8295 cli_runner.go:164] Run: docker container inspect addons-522792 --format={{.State.Status}}
	I0914 16:44:58.778644    8295 addons.go:69] Setting default-storageclass=true in profile "addons-522792"
	I0914 16:44:58.798560    8295 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-522792"
	I0914 16:44:58.800518    8295 cli_runner.go:164] Run: docker container inspect addons-522792 --format={{.State.Status}}
	I0914 16:44:58.778652    8295 addons.go:69] Setting gcp-auth=true in profile "addons-522792"
	I0914 16:44:58.812929    8295 mustload.go:65] Loading cluster: addons-522792
	I0914 16:44:58.813140    8295 config.go:182] Loaded profile config "addons-522792": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 16:44:58.813406    8295 cli_runner.go:164] Run: docker container inspect addons-522792 --format={{.State.Status}}
	I0914 16:44:58.778658    8295 addons.go:69] Setting ingress=true in profile "addons-522792"
	I0914 16:44:58.819530    8295 addons.go:234] Setting addon ingress=true in "addons-522792"
	I0914 16:44:58.819610    8295 host.go:66] Checking if "addons-522792" exists ...
	I0914 16:44:58.822015    8295 cli_runner.go:164] Run: docker container inspect addons-522792 --format={{.State.Status}}
	I0914 16:44:58.790769    8295 addons.go:69] Setting storage-provisioner=true in profile "addons-522792"
	I0914 16:44:58.829246    8295 addons.go:234] Setting addon storage-provisioner=true in "addons-522792"
	I0914 16:44:58.829287    8295 host.go:66] Checking if "addons-522792" exists ...
	I0914 16:44:58.829898    8295 cli_runner.go:164] Run: docker container inspect addons-522792 --format={{.State.Status}}
	I0914 16:44:58.790784    8295 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-522792"
	I0914 16:44:58.835796    8295 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-522792"
	I0914 16:44:58.836145    8295 cli_runner.go:164] Run: docker container inspect addons-522792 --format={{.State.Status}}
	I0914 16:44:58.802323    8295 addons.go:69] Setting metrics-server=true in profile "addons-522792"
	I0914 16:44:58.840582    8295 addons.go:234] Setting addon metrics-server=true in "addons-522792"
	I0914 16:44:58.840631    8295 host.go:66] Checking if "addons-522792" exists ...
	I0914 16:44:58.841105    8295 cli_runner.go:164] Run: docker container inspect addons-522792 --format={{.State.Status}}
	I0914 16:44:58.790794    8295 addons.go:69] Setting volcano=true in profile "addons-522792"
	I0914 16:44:58.855603    8295 addons.go:234] Setting addon volcano=true in "addons-522792"
	I0914 16:44:58.855644    8295 host.go:66] Checking if "addons-522792" exists ...
	I0914 16:44:58.856120    8295 cli_runner.go:164] Run: docker container inspect addons-522792 --format={{.State.Status}}
	I0914 16:44:58.790801    8295 addons.go:69] Setting volumesnapshots=true in profile "addons-522792"
	I0914 16:44:58.856277    8295 addons.go:234] Setting addon volumesnapshots=true in "addons-522792"
	I0914 16:44:58.856297    8295 host.go:66] Checking if "addons-522792" exists ...
	I0914 16:44:58.856687    8295 cli_runner.go:164] Run: docker container inspect addons-522792 --format={{.State.Status}}
	I0914 16:44:58.790860    8295 out.go:177] * Verifying Kubernetes components...
	I0914 16:44:58.923609    8295 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0914 16:44:58.933191    8295 out.go:177]   - Using image docker.io/registry:2.8.3
	I0914 16:44:58.935829    8295 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0914 16:44:58.935894    8295 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0914 16:44:58.935989    8295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522792
	I0914 16:44:58.965874    8295 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 16:44:58.966271    8295 host.go:66] Checking if "addons-522792" exists ...
	I0914 16:44:58.995549    8295 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0914 16:44:58.996335    8295 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0914 16:44:58.998647    8295 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0914 16:44:58.998956    8295 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0914 16:44:58.998970    8295 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0914 16:44:58.999029    8295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522792
	I0914 16:44:59.019243    8295 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0914 16:44:59.019594    8295 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0914 16:44:59.019610    8295 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0914 16:44:59.019682    8295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522792
	I0914 16:44:59.024716    8295 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0914 16:44:59.024807    8295 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0914 16:44:59.024922    8295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522792
	I0914 16:44:59.048852    8295 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0914 16:44:59.050782    8295 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0914 16:44:59.052735    8295 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0914 16:44:59.052838    8295 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0914 16:44:59.054754    8295 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0914 16:44:59.054912    8295 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0914 16:44:59.054939    8295 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0914 16:44:59.055053    8295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522792
	I0914 16:44:59.058175    8295 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0914 16:44:59.059869    8295 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0914 16:44:59.061618    8295 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0914 16:44:59.063937    8295 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0914 16:44:59.063961    8295 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0914 16:44:59.064040    8295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522792
	I0914 16:44:59.072306    8295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0914 16:44:59.086040    8295 addons.go:234] Setting addon default-storageclass=true in "addons-522792"
	I0914 16:44:59.086117    8295 host.go:66] Checking if "addons-522792" exists ...
	I0914 16:44:59.086524    8295 cli_runner.go:164] Run: docker container inspect addons-522792 --format={{.State.Status}}
	I0914 16:44:59.100924    8295 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-522792"
	I0914 16:44:59.100966    8295 host.go:66] Checking if "addons-522792" exists ...
	I0914 16:44:59.101368    8295 cli_runner.go:164] Run: docker container inspect addons-522792 --format={{.State.Status}}
	I0914 16:44:59.116415    8295 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 16:44:59.116656    8295 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0914 16:44:59.120220    8295 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0914 16:44:59.120385    8295 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0914 16:44:59.120315    8295 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0914 16:44:59.122616    8295 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0914 16:44:59.123140    8295 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0914 16:44:59.123841    8295 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0914 16:44:59.123909    8295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522792
	I0914 16:44:59.126524    8295 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 16:44:59.126543    8295 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 16:44:59.126604    8295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522792
	I0914 16:44:59.123700    8295 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 16:44:59.141157    8295 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 16:44:59.141232    8295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522792
	I0914 16:44:59.157638    8295 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0914 16:44:59.163348    8295 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0914 16:44:59.171239    8295 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19643-2222/.minikube/machines/addons-522792/id_rsa Username:docker}
	I0914 16:44:59.177856    8295 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0914 16:44:59.123808    8295 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0914 16:44:59.178302    8295 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0914 16:44:59.178374    8295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522792
	I0914 16:44:59.181778    8295 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0914 16:44:59.182743    8295 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0914 16:44:59.182843    8295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522792
	I0914 16:44:59.205349    8295 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0914 16:44:59.208598    8295 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0914 16:44:59.208623    8295 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0914 16:44:59.208686    8295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522792
	I0914 16:44:59.225862    8295 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19643-2222/.minikube/machines/addons-522792/id_rsa Username:docker}
	I0914 16:44:59.275293    8295 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19643-2222/.minikube/machines/addons-522792/id_rsa Username:docker}
	I0914 16:44:59.287280    8295 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19643-2222/.minikube/machines/addons-522792/id_rsa Username:docker}
	I0914 16:44:59.316322    8295 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19643-2222/.minikube/machines/addons-522792/id_rsa Username:docker}
	I0914 16:44:59.316436    8295 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19643-2222/.minikube/machines/addons-522792/id_rsa Username:docker}
	I0914 16:44:59.343560    8295 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19643-2222/.minikube/machines/addons-522792/id_rsa Username:docker}
	I0914 16:44:59.362801    8295 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 16:44:59.362830    8295 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 16:44:59.362892    8295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522792
	I0914 16:44:59.364266    8295 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19643-2222/.minikube/machines/addons-522792/id_rsa Username:docker}
	I0914 16:44:59.405067    8295 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19643-2222/.minikube/machines/addons-522792/id_rsa Username:docker}
	I0914 16:44:59.407590    8295 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19643-2222/.minikube/machines/addons-522792/id_rsa Username:docker}
	I0914 16:44:59.410348    8295 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19643-2222/.minikube/machines/addons-522792/id_rsa Username:docker}
	I0914 16:44:59.413640    8295 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19643-2222/.minikube/machines/addons-522792/id_rsa Username:docker}
	I0914 16:44:59.416500    8295 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0914 16:44:59.420750    8295 out.go:177]   - Using image docker.io/busybox:stable
	I0914 16:44:59.423077    8295 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0914 16:44:59.423099    8295 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0914 16:44:59.423294    8295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522792
	I0914 16:44:59.439044    8295 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19643-2222/.minikube/machines/addons-522792/id_rsa Username:docker}
	I0914 16:44:59.461132    8295 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19643-2222/.minikube/machines/addons-522792/id_rsa Username:docker}
	I0914 16:44:59.469656    8295 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 16:44:59.985127    8295 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0914 16:44:59.985206    8295 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0914 16:44:59.997028    8295 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0914 16:44:59.997094    8295 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0914 16:45:00.027759    8295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0914 16:45:00.085371    8295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0914 16:45:00.117935    8295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0914 16:45:00.157879    8295 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0914 16:45:00.157917    8295 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0914 16:45:00.381620    8295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0914 16:45:00.465620    8295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 16:45:00.476558    8295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0914 16:45:00.543632    8295 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0914 16:45:00.543662    8295 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0914 16:45:00.649247    8295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0914 16:45:00.690992    8295 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0914 16:45:00.691057    8295 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0914 16:45:00.856644    8295 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0914 16:45:00.856684    8295 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0914 16:45:00.933615    8295 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0914 16:45:00.933661    8295 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0914 16:45:00.983841    8295 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 16:45:00.983877    8295 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0914 16:45:01.038599    8295 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0914 16:45:01.038637    8295 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0914 16:45:01.051030    8295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 16:45:01.335186    8295 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0914 16:45:01.335214    8295 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0914 16:45:01.339039    8295 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0914 16:45:01.339065    8295 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0914 16:45:01.341286    8295 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 16:45:01.341336    8295 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 16:45:01.359903    8295 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0914 16:45:01.359941    8295 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0914 16:45:01.472486    8295 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0914 16:45:01.472511    8295 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0914 16:45:01.496728    8295 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0914 16:45:01.496774    8295 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0914 16:45:01.513168    8295 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0914 16:45:01.513209    8295 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0914 16:45:01.540731    8295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0914 16:45:01.572829    8295 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0914 16:45:01.572858    8295 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0914 16:45:01.580005    8295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0914 16:45:01.617526    8295 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0914 16:45:01.617552    8295 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0914 16:45:01.763229    8295 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0914 16:45:01.763277    8295 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0914 16:45:01.837402    8295 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 16:45:01.837430    8295 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 16:45:01.897968    8295 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.825586833s)
	I0914 16:45:01.897996    8295 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0914 16:45:01.899045    8295 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.429366453s)
	I0914 16:45:01.899863    8295 node_ready.go:35] waiting up to 6m0s for node "addons-522792" to be "Ready" ...
	I0914 16:45:01.907886    8295 node_ready.go:49] node "addons-522792" has status "Ready":"True"
	I0914 16:45:01.907965    8295 node_ready.go:38] duration metric: took 8.083796ms for node "addons-522792" to be "Ready" ...
	I0914 16:45:01.907990    8295 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 16:45:01.924002    8295 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-2qzgj" in "kube-system" namespace to be "Ready" ...
	I0914 16:45:02.094938    8295 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0914 16:45:02.095026    8295 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0914 16:45:02.119355    8295 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0914 16:45:02.119431    8295 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0914 16:45:02.204016    8295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 16:45:02.311663    8295 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0914 16:45:02.311739    8295 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0914 16:45:02.369415    8295 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0914 16:45:02.369493    8295 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0914 16:45:02.378326    8295 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0914 16:45:02.378400    8295 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0914 16:45:02.402221    8295 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-522792" context rescaled to 1 replicas
	I0914 16:45:02.484911    8295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0914 16:45:02.601572    8295 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0914 16:45:02.601657    8295 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0914 16:45:02.690433    8295 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0914 16:45:02.690510    8295 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0914 16:45:02.987668    8295 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0914 16:45:02.987745    8295 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0914 16:45:03.072312    8295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0914 16:45:03.187185    8295 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.159308443s)
	I0914 16:45:03.336420    8295 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0914 16:45:03.336492    8295 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0914 16:45:03.716498    8295 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0914 16:45:03.716571    8295 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0914 16:45:03.994037    8295 pod_ready.go:103] pod "coredns-7c65d6cfc9-2qzgj" in "kube-system" namespace has status "Ready":"False"
	I0914 16:45:04.226348    8295 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0914 16:45:04.226422    8295 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0914 16:45:05.172874    8295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0914 16:45:06.081839    8295 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0914 16:45:06.082019    8295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522792
	I0914 16:45:06.111835    8295 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19643-2222/.minikube/machines/addons-522792/id_rsa Username:docker}
	I0914 16:45:06.432976    8295 pod_ready.go:103] pod "coredns-7c65d6cfc9-2qzgj" in "kube-system" namespace has status "Ready":"False"
	I0914 16:45:07.115733    8295 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0914 16:45:07.430647    8295 addons.go:234] Setting addon gcp-auth=true in "addons-522792"
	I0914 16:45:07.430744    8295 host.go:66] Checking if "addons-522792" exists ...
	I0914 16:45:07.431494    8295 cli_runner.go:164] Run: docker container inspect addons-522792 --format={{.State.Status}}
	I0914 16:45:07.451075    8295 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0914 16:45:07.451144    8295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522792
	I0914 16:45:07.475007    8295 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19643-2222/.minikube/machines/addons-522792/id_rsa Username:docker}
	I0914 16:45:08.466796    8295 pod_ready.go:103] pod "coredns-7c65d6cfc9-2qzgj" in "kube-system" namespace has status "Ready":"False"
	I0914 16:45:09.363557    8295 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.981896789s)
	I0914 16:45:09.363611    8295 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.897946553s)
	I0914 16:45:09.363796    8295 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.887209211s)
	I0914 16:45:09.363869    8295 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (9.245474971s)
	I0914 16:45:09.364061    8295 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.278651555s)
	I0914 16:45:09.364113    8295 addons.go:475] Verifying addon ingress=true in "addons-522792"
	I0914 16:45:09.366236    8295 out.go:177] * Verifying ingress addon...
	I0914 16:45:09.368855    8295 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	W0914 16:45:09.385456    8295 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0914 16:45:09.388760    8295 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0914 16:45:09.388833    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:09.874926    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:10.375822    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:10.928924    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:10.945340    8295 pod_ready.go:103] pod "coredns-7c65d6cfc9-2qzgj" in "kube-system" namespace has status "Ready":"False"
	I0914 16:45:11.377120    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:11.904851    8295 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (11.255563983s)
	I0914 16:45:11.905028    8295 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.853961251s)
	I0914 16:45:11.905125    8295 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (10.364364138s)
	I0914 16:45:11.905157    8295 addons.go:475] Verifying addon registry=true in "addons-522792"
	I0914 16:45:11.905356    8295 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (10.325316078s)
	I0914 16:45:11.905698    8295 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (9.701604131s)
	I0914 16:45:11.905718    8295 addons.go:475] Verifying addon metrics-server=true in "addons-522792"
	I0914 16:45:11.905798    8295 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (9.420811331s)
	W0914 16:45:11.905815    8295 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0914 16:45:11.905831    8295 retry.go:31] will retry after 272.473728ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0914 16:45:11.905900    8295 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (8.833508355s)
	I0914 16:45:11.907400    8295 out.go:177] * Verifying registry addon...
	I0914 16:45:11.907597    8295 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-522792 service yakd-dashboard -n yakd-dashboard
	
	I0914 16:45:11.910554    8295 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0914 16:45:11.937631    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:11.947092    8295 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0914 16:45:11.947208    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:12.179295    8295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0914 16:45:12.388114    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:12.490329    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:12.873857    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:12.915716    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:12.949390    8295 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.776415183s)
	I0914 16:45:12.949472    8295 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-522792"
	I0914 16:45:12.949735    8295 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.498633584s)
	I0914 16:45:12.952260    8295 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0914 16:45:12.952264    8295 out.go:177] * Verifying csi-hostpath-driver addon...
	I0914 16:45:12.955434    8295 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0914 16:45:12.956440    8295 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0914 16:45:12.957465    8295 pod_ready.go:103] pod "coredns-7c65d6cfc9-2qzgj" in "kube-system" namespace has status "Ready":"False"
	I0914 16:45:12.957670    8295 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0914 16:45:12.957713    8295 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0914 16:45:12.974656    8295 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0914 16:45:12.974743    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:13.024821    8295 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0914 16:45:13.024894    8295 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0914 16:45:13.058559    8295 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0914 16:45:13.058630    8295 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0914 16:45:13.107107    8295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0914 16:45:13.373503    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:13.414471    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:13.469645    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:13.874502    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:13.916646    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:13.962255    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:14.373085    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:14.414532    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:14.481687    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:14.654562    8295 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.5473571s)
	I0914 16:45:14.654994    8295 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.475601797s)
	I0914 16:45:14.657910    8295 addons.go:475] Verifying addon gcp-auth=true in "addons-522792"
	I0914 16:45:14.660193    8295 out.go:177] * Verifying gcp-auth addon...
	I0914 16:45:14.662929    8295 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0914 16:45:14.665833    8295 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0914 16:45:14.874049    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:14.976061    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:14.978281    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:15.373532    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:15.414238    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:15.430402    8295 pod_ready.go:103] pod "coredns-7c65d6cfc9-2qzgj" in "kube-system" namespace has status "Ready":"False"
	I0914 16:45:15.462265    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:15.878240    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:15.914816    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:15.962060    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:16.374386    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:16.415013    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:16.461692    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:16.873794    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:16.915076    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:16.974795    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:17.373211    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:17.414860    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:17.431020    8295 pod_ready.go:103] pod "coredns-7c65d6cfc9-2qzgj" in "kube-system" namespace has status "Ready":"False"
	I0914 16:45:17.461234    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:17.874073    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:17.914655    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:17.962848    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:18.374166    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:18.414537    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:18.461553    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:18.875268    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:18.916380    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:18.962847    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:19.374127    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:19.414856    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:19.431887    8295 pod_ready.go:103] pod "coredns-7c65d6cfc9-2qzgj" in "kube-system" namespace has status "Ready":"False"
	I0914 16:45:19.461368    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:19.873543    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:19.915010    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:19.974357    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:20.374153    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:20.414745    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:20.465882    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:20.873797    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:20.914338    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:20.961068    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:21.374243    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:21.414945    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:21.432104    8295 pod_ready.go:103] pod "coredns-7c65d6cfc9-2qzgj" in "kube-system" namespace has status "Ready":"False"
	I0914 16:45:21.462616    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:21.873919    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:21.914503    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:21.962560    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:22.374432    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:22.415191    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:22.462310    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:22.873126    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:22.914726    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:22.961708    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:23.373593    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:23.414210    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:23.461415    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:23.874118    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:23.915705    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:23.930601    8295 pod_ready.go:103] pod "coredns-7c65d6cfc9-2qzgj" in "kube-system" namespace has status "Ready":"False"
	I0914 16:45:23.960965    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:24.373434    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:24.414610    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:24.461035    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:24.874131    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:24.915575    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:24.961644    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:25.373679    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:25.414088    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:25.461980    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:25.873265    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:25.915650    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:25.930899    8295 pod_ready.go:103] pod "coredns-7c65d6cfc9-2qzgj" in "kube-system" namespace has status "Ready":"False"
	I0914 16:45:25.961090    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:26.374719    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:26.414426    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:26.461728    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:26.873267    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:26.914938    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:26.961945    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:27.373556    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:27.414020    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:27.461672    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:27.873195    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:27.915208    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:27.931839    8295 pod_ready.go:103] pod "coredns-7c65d6cfc9-2qzgj" in "kube-system" namespace has status "Ready":"False"
	I0914 16:45:27.962417    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:28.373770    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:28.415080    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:28.463976    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:28.874133    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:28.921698    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:28.962204    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:29.375238    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:29.414754    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:29.476221    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:29.873524    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:29.915135    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:29.961503    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:30.374379    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:30.422835    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:30.432380    8295 pod_ready.go:103] pod "coredns-7c65d6cfc9-2qzgj" in "kube-system" namespace has status "Ready":"False"
	I0914 16:45:30.517721    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:30.879636    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:30.915779    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:30.962164    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:31.374234    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:31.416104    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:31.430655    8295 pod_ready.go:93] pod "coredns-7c65d6cfc9-2qzgj" in "kube-system" namespace has status "Ready":"True"
	I0914 16:45:31.430727    8295 pod_ready.go:82] duration metric: took 29.50664949s for pod "coredns-7c65d6cfc9-2qzgj" in "kube-system" namespace to be "Ready" ...
	I0914 16:45:31.430756    8295 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-58jmz" in "kube-system" namespace to be "Ready" ...
	I0914 16:45:31.433203    8295 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-58jmz" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-58jmz" not found
	I0914 16:45:31.433267    8295 pod_ready.go:82] duration metric: took 2.490206ms for pod "coredns-7c65d6cfc9-58jmz" in "kube-system" namespace to be "Ready" ...
	E0914 16:45:31.433291    8295 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-58jmz" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-58jmz" not found
	I0914 16:45:31.433314    8295 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-522792" in "kube-system" namespace to be "Ready" ...
	I0914 16:45:31.440077    8295 pod_ready.go:93] pod "etcd-addons-522792" in "kube-system" namespace has status "Ready":"True"
	I0914 16:45:31.440146    8295 pod_ready.go:82] duration metric: took 6.797677ms for pod "etcd-addons-522792" in "kube-system" namespace to be "Ready" ...
	I0914 16:45:31.440173    8295 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-522792" in "kube-system" namespace to be "Ready" ...
	I0914 16:45:31.446395    8295 pod_ready.go:93] pod "kube-apiserver-addons-522792" in "kube-system" namespace has status "Ready":"True"
	I0914 16:45:31.446459    8295 pod_ready.go:82] duration metric: took 6.265129ms for pod "kube-apiserver-addons-522792" in "kube-system" namespace to be "Ready" ...
	I0914 16:45:31.446486    8295 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-522792" in "kube-system" namespace to be "Ready" ...
	I0914 16:45:31.453290    8295 pod_ready.go:93] pod "kube-controller-manager-addons-522792" in "kube-system" namespace has status "Ready":"True"
	I0914 16:45:31.453361    8295 pod_ready.go:82] duration metric: took 6.853538ms for pod "kube-controller-manager-addons-522792" in "kube-system" namespace to be "Ready" ...
	I0914 16:45:31.453388    8295 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-m67s8" in "kube-system" namespace to be "Ready" ...
	I0914 16:45:31.461635    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:31.628463    8295 pod_ready.go:93] pod "kube-proxy-m67s8" in "kube-system" namespace has status "Ready":"True"
	I0914 16:45:31.628537    8295 pod_ready.go:82] duration metric: took 175.128165ms for pod "kube-proxy-m67s8" in "kube-system" namespace to be "Ready" ...
	I0914 16:45:31.628562    8295 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-522792" in "kube-system" namespace to be "Ready" ...
	I0914 16:45:31.873591    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:31.914651    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:31.961336    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:32.033215    8295 pod_ready.go:93] pod "kube-scheduler-addons-522792" in "kube-system" namespace has status "Ready":"True"
	I0914 16:45:32.033244    8295 pod_ready.go:82] duration metric: took 404.659665ms for pod "kube-scheduler-addons-522792" in "kube-system" namespace to be "Ready" ...
	I0914 16:45:32.033259    8295 pod_ready.go:39] duration metric: took 30.125243479s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 16:45:32.033284    8295 api_server.go:52] waiting for apiserver process to appear ...
	I0914 16:45:32.033376    8295 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 16:45:32.051584    8295 api_server.go:72] duration metric: took 33.280996006s to wait for apiserver process to appear ...
	I0914 16:45:32.051612    8295 api_server.go:88] waiting for apiserver healthz status ...
	I0914 16:45:32.051633    8295 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 16:45:32.062116    8295 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0914 16:45:32.063622    8295 api_server.go:141] control plane version: v1.31.1
	I0914 16:45:32.063661    8295 api_server.go:131] duration metric: took 12.036117ms to wait for apiserver health ...
	I0914 16:45:32.063672    8295 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 16:45:32.235912    8295 system_pods.go:59] 17 kube-system pods found
	I0914 16:45:32.235999    8295 system_pods.go:61] "coredns-7c65d6cfc9-2qzgj" [36dbb75b-4727-4167-b5d2-5adc385ca7b7] Running
	I0914 16:45:32.236024    8295 system_pods.go:61] "csi-hostpath-attacher-0" [9b2e55d3-5c3a-42cf-877e-aaa798adc23f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0914 16:45:32.236067    8295 system_pods.go:61] "csi-hostpath-resizer-0" [3dfd1543-4343-46f3-b8aa-da3002ab4106] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0914 16:45:32.236095    8295 system_pods.go:61] "csi-hostpathplugin-f8pxq" [0d3a6134-ec58-4ef6-98d8-4408cd3926b2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0914 16:45:32.236117    8295 system_pods.go:61] "etcd-addons-522792" [85d15929-54c1-49aa-b70b-a30fa63171aa] Running
	I0914 16:45:32.236153    8295 system_pods.go:61] "kube-apiserver-addons-522792" [6bbed219-451a-4cad-aa00-b1871b22f589] Running
	I0914 16:45:32.236178    8295 system_pods.go:61] "kube-controller-manager-addons-522792" [11f83459-36cd-477d-8910-bf6e94f2bb13] Running
	I0914 16:45:32.236198    8295 system_pods.go:61] "kube-ingress-dns-minikube" [99098681-4871-4c78-a193-c002effb4dfb] Running
	I0914 16:45:32.236232    8295 system_pods.go:61] "kube-proxy-m67s8" [9666fb5b-276d-4c48-b601-0cb2dec53c3b] Running
	I0914 16:45:32.236256    8295 system_pods.go:61] "kube-scheduler-addons-522792" [5be0f6bd-e2d9-4a1e-a232-3fb0b61f325b] Running
	I0914 16:45:32.236279    8295 system_pods.go:61] "metrics-server-84c5f94fbc-9n99x" [ae800b4d-af12-4463-8d38-890543f21aef] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 16:45:32.236315    8295 system_pods.go:61] "nvidia-device-plugin-daemonset-57q46" [2c5bc5e1-fe9b-4b0a-9675-a3bba75998e6] Running
	I0914 16:45:32.236340    8295 system_pods.go:61] "registry-66c9cd494c-hpzpg" [1dfaea65-f8b7-4b16-a20d-1537cb255324] Running
	I0914 16:45:32.236362    8295 system_pods.go:61] "registry-proxy-gc6wz" [fc4eccaf-180b-499a-bd4f-df2cba03caa3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0914 16:45:32.236396    8295 system_pods.go:61] "snapshot-controller-56fcc65765-8ffl4" [20b24244-4b1f-4fb6-9d9a-bc326a3d2ac0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0914 16:45:32.236423    8295 system_pods.go:61] "snapshot-controller-56fcc65765-bvnj9" [e5710355-cb19-4e6d-9aa0-31f4c0795e77] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0914 16:45:32.236444    8295 system_pods.go:61] "storage-provisioner" [86dfd26f-c18e-4f27-94c6-c3aa3241b40b] Running
	I0914 16:45:32.236480    8295 system_pods.go:74] duration metric: took 172.800496ms to wait for pod list to return data ...
	I0914 16:45:32.236505    8295 default_sa.go:34] waiting for default service account to be created ...
	I0914 16:45:32.373032    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:32.414442    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:32.427644    8295 default_sa.go:45] found service account: "default"
	I0914 16:45:32.427667    8295 default_sa.go:55] duration metric: took 191.143752ms for default service account to be created ...
	I0914 16:45:32.427678    8295 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 16:45:32.461128    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:32.645056    8295 system_pods.go:86] 17 kube-system pods found
	I0914 16:45:32.645090    8295 system_pods.go:89] "coredns-7c65d6cfc9-2qzgj" [36dbb75b-4727-4167-b5d2-5adc385ca7b7] Running
	I0914 16:45:32.645102    8295 system_pods.go:89] "csi-hostpath-attacher-0" [9b2e55d3-5c3a-42cf-877e-aaa798adc23f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0914 16:45:32.645109    8295 system_pods.go:89] "csi-hostpath-resizer-0" [3dfd1543-4343-46f3-b8aa-da3002ab4106] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0914 16:45:32.645117    8295 system_pods.go:89] "csi-hostpathplugin-f8pxq" [0d3a6134-ec58-4ef6-98d8-4408cd3926b2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0914 16:45:32.645122    8295 system_pods.go:89] "etcd-addons-522792" [85d15929-54c1-49aa-b70b-a30fa63171aa] Running
	I0914 16:45:32.645127    8295 system_pods.go:89] "kube-apiserver-addons-522792" [6bbed219-451a-4cad-aa00-b1871b22f589] Running
	I0914 16:45:32.645137    8295 system_pods.go:89] "kube-controller-manager-addons-522792" [11f83459-36cd-477d-8910-bf6e94f2bb13] Running
	I0914 16:45:32.645143    8295 system_pods.go:89] "kube-ingress-dns-minikube" [99098681-4871-4c78-a193-c002effb4dfb] Running
	I0914 16:45:32.645152    8295 system_pods.go:89] "kube-proxy-m67s8" [9666fb5b-276d-4c48-b601-0cb2dec53c3b] Running
	I0914 16:45:32.645157    8295 system_pods.go:89] "kube-scheduler-addons-522792" [5be0f6bd-e2d9-4a1e-a232-3fb0b61f325b] Running
	I0914 16:45:32.645163    8295 system_pods.go:89] "metrics-server-84c5f94fbc-9n99x" [ae800b4d-af12-4463-8d38-890543f21aef] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 16:45:32.645172    8295 system_pods.go:89] "nvidia-device-plugin-daemonset-57q46" [2c5bc5e1-fe9b-4b0a-9675-a3bba75998e6] Running
	I0914 16:45:32.645178    8295 system_pods.go:89] "registry-66c9cd494c-hpzpg" [1dfaea65-f8b7-4b16-a20d-1537cb255324] Running
	I0914 16:45:32.645185    8295 system_pods.go:89] "registry-proxy-gc6wz" [fc4eccaf-180b-499a-bd4f-df2cba03caa3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0914 16:45:32.645195    8295 system_pods.go:89] "snapshot-controller-56fcc65765-8ffl4" [20b24244-4b1f-4fb6-9d9a-bc326a3d2ac0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0914 16:45:32.645202    8295 system_pods.go:89] "snapshot-controller-56fcc65765-bvnj9" [e5710355-cb19-4e6d-9aa0-31f4c0795e77] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0914 16:45:32.645206    8295 system_pods.go:89] "storage-provisioner" [86dfd26f-c18e-4f27-94c6-c3aa3241b40b] Running
	I0914 16:45:32.645215    8295 system_pods.go:126] duration metric: took 217.531266ms to wait for k8s-apps to be running ...
	I0914 16:45:32.645226    8295 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 16:45:32.645282    8295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 16:45:32.665925    8295 system_svc.go:56] duration metric: took 20.690773ms WaitForService to wait for kubelet
	I0914 16:45:32.665953    8295 kubeadm.go:582] duration metric: took 33.895370703s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 16:45:32.665971    8295 node_conditions.go:102] verifying NodePressure condition ...
	I0914 16:45:32.828605    8295 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0914 16:45:32.828636    8295 node_conditions.go:123] node cpu capacity is 2
	I0914 16:45:32.828649    8295 node_conditions.go:105] duration metric: took 162.673437ms to run NodePressure ...
	I0914 16:45:32.828662    8295 start.go:241] waiting for startup goroutines ...
	I0914 16:45:32.874240    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:32.914478    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:32.960769    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:33.374230    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:33.414805    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:33.461623    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:33.873934    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:33.914389    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:33.962264    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:34.373211    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:34.414459    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:34.461707    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:34.874580    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:34.914598    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:34.961771    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:35.374725    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:35.414943    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 16:45:35.462259    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:35.874337    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:35.915632    8295 kapi.go:107] duration metric: took 24.005094301s to wait for kubernetes.io/minikube-addons=registry ...
	I0914 16:45:35.962610    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:36.373667    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:36.461094    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:36.875034    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:36.962288    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:37.373105    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:37.460847    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:37.874813    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:37.975208    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:38.374282    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:38.462687    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:38.873956    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:38.962403    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:39.373010    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:39.462057    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:39.873925    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:39.964893    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:40.373987    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:40.461336    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:40.874150    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:40.976298    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:41.374865    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:41.464045    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:41.874432    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:41.962334    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:42.374631    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:42.462762    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:42.880665    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:42.961938    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:43.374234    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:43.462859    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:43.875219    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:43.962549    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:44.373965    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:44.462303    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:44.875514    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:44.960988    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:45.376493    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:45.461839    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:45.874236    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:45.975411    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:46.375136    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:46.475300    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:46.873630    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:46.961879    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:47.373484    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:47.461752    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:47.873956    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:47.961214    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:48.374056    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:48.461853    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:48.917271    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:48.984422    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:49.373949    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:49.462619    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:49.874213    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:49.962277    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:50.376271    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:50.462214    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:50.873669    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:50.963288    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:51.375749    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:51.466282    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:51.873714    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:51.975266    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:52.373250    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:52.462189    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:52.874684    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:52.961090    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:53.373810    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:53.462245    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:53.872743    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:53.962441    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:54.374963    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:54.477338    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:54.873053    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:54.961724    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:55.378745    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:55.477612    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:55.874215    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:55.977085    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:56.374468    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:56.479349    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:56.874175    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:56.962020    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:57.373626    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:57.461133    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:57.876839    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:57.962967    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:58.374047    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:58.462036    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:58.877704    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:58.961621    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:59.373656    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:59.463416    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:45:59.873826    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:45:59.961862    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:00.375032    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:00.462952    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:00.874044    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:00.961442    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:01.375017    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:01.461893    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:01.874311    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:01.975309    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:02.373667    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:02.461364    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:02.873545    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:02.960871    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:03.374851    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:03.475456    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:03.873404    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:03.962600    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:04.373351    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:04.461110    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:04.873946    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:04.961288    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:05.374657    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:05.476447    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:05.874107    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:05.961467    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:06.373438    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:06.462149    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:06.875072    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:06.962467    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:07.373999    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:07.461466    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 16:46:07.874026    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:07.961910    8295 kapi.go:107] duration metric: took 55.005467161s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0914 16:46:08.373044    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:08.874628    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:09.373478    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:09.873365    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:10.373860    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:10.877999    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:11.373523    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:11.873414    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:12.373994    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:12.872954    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:13.374255    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:13.874105    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:14.373554    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:14.876862    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:15.373199    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:15.874292    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:16.373036    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:16.874324    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:17.374080    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:17.873336    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:18.373850    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:18.881493    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:19.374181    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:19.874700    8295 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 16:46:20.374584    8295 kapi.go:107] duration metric: took 1m11.005728494s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0914 16:46:36.668181    8295 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0914 16:46:36.668208    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:37.167061    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:37.667493    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:38.166476    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:38.666666    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:39.166489    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:39.666121    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:40.166914    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:40.667528    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:41.166323    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:41.667228    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:42.168493    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:42.666430    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:43.166008    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:43.667385    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:44.166559    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:44.667744    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:45.168059    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:45.666206    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:46.166117    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:46.667247    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:47.167505    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:47.666693    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:48.167012    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:48.666477    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:49.166967    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:49.667397    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:50.167209    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:50.667379    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:51.167728    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:51.666172    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:52.167406    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:52.668077    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:53.166779    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:53.667012    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:54.167073    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:54.666765    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:55.166595    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:55.666186    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:56.166782    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:56.666814    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:57.168444    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:57.666500    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:58.166833    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:58.666844    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:59.167192    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:46:59.667347    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:00.173927    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:00.667865    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:01.166856    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:01.666375    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:02.166893    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:02.667637    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:03.166228    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:03.667506    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:04.166436    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:04.666425    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:05.167311    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:05.666888    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:06.166918    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:06.667954    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:07.166247    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:07.666509    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:08.166345    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:08.667563    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:09.166279    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:09.666754    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:10.166319    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:10.667632    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:11.166540    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:11.668452    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:12.167519    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:12.666868    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:13.166306    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:13.667359    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:14.167065    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:14.666977    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:15.166972    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:15.666660    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:16.166371    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:16.666684    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:17.167073    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:17.666671    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:18.166921    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:18.666613    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:19.166350    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:19.667243    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:20.166939    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:20.666952    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:21.167618    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:21.667043    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:22.166727    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:22.666157    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:23.167025    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:23.666345    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:24.166919    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:24.666699    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:25.166793    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:25.666429    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:26.167043    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:26.667083    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:27.167614    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:27.667293    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:28.166341    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:28.667964    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:29.166008    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:29.666472    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:30.168026    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:30.666310    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:31.167110    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:31.666177    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:32.166665    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:32.666535    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:33.166619    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:33.666340    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:34.167066    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:34.667146    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:35.167135    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:35.666744    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:36.166863    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:36.667045    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:37.167251    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:37.667065    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:38.166429    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:38.666771    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:39.166760    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:39.666218    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:40.166661    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:40.667593    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:41.166963    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:41.666166    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:42.170636    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:42.667108    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:43.167365    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:43.666920    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:44.166345    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:44.667525    8295 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 16:47:45.171214    8295 kapi.go:107] duration metric: took 2m30.508286466s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0914 16:47:45.174092    8295 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-522792 cluster.
	I0914 16:47:45.176740    8295 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0914 16:47:45.183308    8295 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0914 16:47:45.185744    8295 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, nvidia-device-plugin, storage-provisioner-rancher, volcano, storage-provisioner, metrics-server, inspektor-gadget, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0914 16:47:45.188712    8295 addons.go:510] duration metric: took 2m46.417760523s for enable addons: enabled=[cloud-spanner ingress-dns nvidia-device-plugin storage-provisioner-rancher volcano storage-provisioner metrics-server inspektor-gadget yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0914 16:47:45.188781    8295 start.go:246] waiting for cluster config update ...
	I0914 16:47:45.188821    8295 start.go:255] writing updated cluster config ...
	I0914 16:47:45.189220    8295 ssh_runner.go:195] Run: rm -f paused
	I0914 16:47:45.574370    8295 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0914 16:47:45.576799    8295 out.go:177] * Done! kubectl is now configured to use "addons-522792" cluster and "default" namespace by default
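	The gcp-auth messages above note that a pod can opt out of credential mounting by carrying the `gcp-auth-skip-secret` label. A minimal sketch of such a manifest (pod and file names are hypothetical, not from this run):

	```shell
	# Hypothetical sketch: write a pod manifest that opts out of GCP credential
	# mounting via the gcp-auth-skip-secret label described in the log above.
	cat > /tmp/skip-gcp-auth-pod.yaml <<'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: demo-no-gcp-auth   # hypothetical pod name
	  labels:
	    gcp-auth-skip-secret: "true"
	spec:
	  containers:
	  - name: demo
	    image: busybox:stable
	    command: ["sleep", "3600"]
	EOF
	# Apply with: kubectl apply -f /tmp/skip-gcp-auth-pod.yaml
	grep -q 'gcp-auth-skip-secret' /tmp/skip-gcp-auth-pod.yaml && echo "label present"
	```

	Per the log, existing pods would still need to be recreated (or the addon re-enabled with --refresh) for a label change to take effect.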
	
	
	==> Docker <==
	Sep 14 16:57:18 addons-522792 dockerd[1284]: time="2024-09-14T16:57:18.834606203Z" level=info msg="ignoring event" container=7ab926b2883ca15ade7a2a600327fe117c9c9e1080c39aba93f6934c87aa6902 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 14 16:57:19 addons-522792 dockerd[1284]: time="2024-09-14T16:57:19.018519713Z" level=info msg="ignoring event" container=3ebc2b07b29453e202df90c2e7d41abc0e8630e1c6510a44870dc87105a869e0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 14 16:57:19 addons-522792 dockerd[1284]: time="2024-09-14T16:57:19.067674356Z" level=info msg="ignoring event" container=6672f9af4beb9dbb2190abe556ccc775d3488dcc776cac4f2184c0e12b1cd430 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 14 16:57:19 addons-522792 dockerd[1284]: time="2024-09-14T16:57:19.596273735Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 14 16:57:19 addons-522792 dockerd[1284]: time="2024-09-14T16:57:19.598571064Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 14 16:57:24 addons-522792 dockerd[1284]: time="2024-09-14T16:57:24.426736361Z" level=info msg="ignoring event" container=4d17eac7959d9ae4b58b10629e0bfc39a1afe0282a62f8cf8662c3da4769f9ed module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 14 16:57:24 addons-522792 dockerd[1284]: time="2024-09-14T16:57:24.643329081Z" level=info msg="ignoring event" container=d958ec8d95ba3b8a4c30fdee544f92af8aa3c1f5dbd2d0392a4d1f12bc039e29 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 14 16:57:25 addons-522792 cri-dockerd[1542]: time="2024-09-14T16:57:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/728af1c3e102f2e5ad078ab0f5af01b48c33e30b9cee8128424a552580929b02/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Sep 14 16:57:25 addons-522792 dockerd[1284]: time="2024-09-14T16:57:25.466835338Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 14 16:57:25 addons-522792 cri-dockerd[1542]: time="2024-09-14T16:57:25Z" level=info msg="Stop pulling image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: Status: Downloaded newer image for busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 14 16:57:26 addons-522792 dockerd[1284]: time="2024-09-14T16:57:26.128328641Z" level=info msg="ignoring event" container=5097c1caabff44f2395e605798c4cd7516f7d4fbc2821812e84010f30d671d4e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 14 16:57:27 addons-522792 dockerd[1284]: time="2024-09-14T16:57:27.490111465Z" level=info msg="ignoring event" container=728af1c3e102f2e5ad078ab0f5af01b48c33e30b9cee8128424a552580929b02 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 14 16:57:29 addons-522792 cri-dockerd[1542]: time="2024-09-14T16:57:29Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/915bd758744344e7a051debd0bad228583c28832904638c9719c1612a00ddabe/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Sep 14 16:57:29 addons-522792 cri-dockerd[1542]: time="2024-09-14T16:57:29Z" level=info msg="Stop pulling image busybox:stable: Status: Downloaded newer image for busybox:stable"
	Sep 14 16:57:30 addons-522792 dockerd[1284]: time="2024-09-14T16:57:30.228651719Z" level=info msg="ignoring event" container=16b338ab1a8fc6da04dd41bb8d902d4bb940753351d99777f8184b154097976b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 14 16:57:31 addons-522792 dockerd[1284]: time="2024-09-14T16:57:31.570970258Z" level=info msg="ignoring event" container=915bd758744344e7a051debd0bad228583c28832904638c9719c1612a00ddabe module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 14 16:57:33 addons-522792 cri-dockerd[1542]: time="2024-09-14T16:57:33Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b56a2ab00960243fab703f0d812246fd4449f02c695d47da794279985e15afe8/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Sep 14 16:57:33 addons-522792 dockerd[1284]: time="2024-09-14T16:57:33.270943436Z" level=info msg="ignoring event" container=c722bd5b9f674cb63bc420fc82cdc3e98a5c1a71988cc383bb476e6066a253c2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 14 16:57:34 addons-522792 dockerd[1284]: time="2024-09-14T16:57:34.628850153Z" level=info msg="ignoring event" container=b56a2ab00960243fab703f0d812246fd4449f02c695d47da794279985e15afe8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 14 16:57:40 addons-522792 dockerd[1284]: time="2024-09-14T16:57:40.809126617Z" level=info msg="ignoring event" container=53111fd71beef270b29149a30da9680a53c027f7d9ecfd85479aa6349718ec1a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 14 16:57:41 addons-522792 dockerd[1284]: time="2024-09-14T16:57:41.454069547Z" level=info msg="ignoring event" container=b3d71d7f189b3a87445b058f7e5053b67544603c53735e7e7c821190e3e799c3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 14 16:57:41 addons-522792 dockerd[1284]: time="2024-09-14T16:57:41.558999836Z" level=info msg="ignoring event" container=1a0590ce53343ee8d6a3111e046da63f551bb75d16cff089864482c296df9521 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 14 16:57:41 addons-522792 dockerd[1284]: time="2024-09-14T16:57:41.697895631Z" level=info msg="ignoring event" container=ac3458f68ff5d0e406c3b0be519875b101ff9792d388a1c614ff010b64bde3e1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 14 16:57:41 addons-522792 cri-dockerd[1542]: time="2024-09-14T16:57:41Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"registry-proxy-gc6wz_kube-system\": unexpected command output nsenter: cannot open /proc/3650/ns/net: No such file or directory\n with error: exit status 1"
	Sep 14 16:57:41 addons-522792 dockerd[1284]: time="2024-09-14T16:57:41.831386265Z" level=info msg="ignoring event" container=e8ed6e6a39de23c18a4d12c9056936c36df01195f8cd39c2b4499dbf2b4481d1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c722bd5b9f674       fc9db2894f4e4                                                                                                                9 seconds ago       Exited              helper-pod                0                   b56a2ab009602       helper-pod-delete-pvc-23df3344-fb76-4fb6-94e8-b80679b102a4
	e784491ae9de0       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec            38 seconds ago      Exited              gadget                    7                   79d6eaa16fcd1       gadget-t9pw2
	31d5f8ab109c0       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 9 minutes ago       Running             gcp-auth                  0                   287beb19bbebf       gcp-auth-89d5ffd79-ttqps
	02f4a841f119b       registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce             11 minutes ago      Running             controller                0                   b778765966f0f       ingress-nginx-controller-bc57996ff-tv6km
	0e5b5570e12f4       420193b27261a                                                                                                                11 minutes ago      Exited              patch                     1                   27b1d9ad448f3       ingress-nginx-admission-patch-cd4xv
	6f1df0347f422       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              create                    0                   edb8d0857eb12       ingress-nginx-admission-create-7trzr
	5acc4da94b5c3       registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9        12 minutes ago      Running             metrics-server            0                   48008937b4a66       metrics-server-84c5f94fbc-9n99x
	61b49b45de57f       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                       12 minutes ago      Running             local-path-provisioner    0                   55095d1f4a0c9       local-path-provisioner-86d989889c-qks4k
	1a0590ce53343       gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367              12 minutes ago      Exited              registry-proxy            0                   e8ed6e6a39de2       registry-proxy-gc6wz
	de6e0b064f9fa       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c             12 minutes ago      Running             minikube-ingress-dns      0                   e916353431579       kube-ingress-dns-minikube
	3694f81d671b6       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc               12 minutes ago      Running             cloud-spanner-emulator    0                   d9470cb741c6f       cloud-spanner-emulator-769b77f747-xmnh4
	3851d8f2fcf6a       ba04bb24b9575                                                                                                                12 minutes ago      Running             storage-provisioner       0                   4c1cce941ada3       storage-provisioner
	8a6bc27224541       2f6c962e7b831                                                                                                                12 minutes ago      Running             coredns                   0                   333f61acb4dd1       coredns-7c65d6cfc9-2qzgj
	63a58c65c9612       24a140c548c07                                                                                                                12 minutes ago      Running             kube-proxy                0                   36649605ff888       kube-proxy-m67s8
	fdb4207b62236       7f8aa378bb47d                                                                                                                12 minutes ago      Running             kube-scheduler            0                   4db9f38257a4b       kube-scheduler-addons-522792
	17a474cc64842       27e3830e14027                                                                                                                12 minutes ago      Running             etcd                      0                   027e89aa28eb6       etcd-addons-522792
	943eae91bc360       d3f53a98c0a9d                                                                                                                12 minutes ago      Running             kube-apiserver            0                   60aa624d6c237       kube-apiserver-addons-522792
	7851162366ed1       279f381cb3736                                                                                                                12 minutes ago      Running             kube-controller-manager   0                   6a8a1e53bcc1b       kube-controller-manager-addons-522792
	
	
	==> controller_ingress [02f4a841f119] <==
	W0914 16:46:19.467545       7 client_config.go:659] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
	I0914 16:46:19.467698       7 main.go:205] "Creating API client" host="https://10.96.0.1:443"
	I0914 16:46:19.478737       7 main.go:248] "Running in Kubernetes cluster" major="1" minor="31" git="v1.31.1" state="clean" commit="948afe5ca072329a73c8e79ed5938717a5cb3d21" platform="linux/arm64"
	I0914 16:46:19.830039       7 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0914 16:46:19.846760       7 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0914 16:46:19.857897       7 nginx.go:271] "Starting NGINX Ingress controller"
	I0914 16:46:19.881588       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"9d94f7a4-2076-40dc-9523-d52793095886", APIVersion:"v1", ResourceVersion:"675", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0914 16:46:19.891670       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"2801f54a-1ec8-43a8-a52b-ebb49f519c88", APIVersion:"v1", ResourceVersion:"676", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0914 16:46:19.892041       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"2930423f-c94d-48f0-91ed-845f256cc89e", APIVersion:"v1", ResourceVersion:"677", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0914 16:46:21.059478       7 nginx.go:317] "Starting NGINX process"
	I0914 16:46:21.059778       7 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0914 16:46:21.060056       7 nginx.go:337] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0914 16:46:21.060331       7 controller.go:193] "Configuration changes detected, backend reload required"
	I0914 16:46:21.078904       7 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0914 16:46:21.079278       7 status.go:85] "New leader elected" identity="ingress-nginx-controller-bc57996ff-tv6km"
	I0914 16:46:21.090134       7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-bc57996ff-tv6km" node="addons-522792"
	I0914 16:46:21.101063       7 controller.go:213] "Backend successfully reloaded"
	I0914 16:46:21.101143       7 controller.go:224] "Initial sync, sleeping for 1 second"
	I0914 16:46:21.101248       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-tv6km", UID:"2ea60157-3e10-43dc-934f-52e60d08020b", APIVersion:"v1", ResourceVersion:"708", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	  Build:         46e76e5916813cfca2a9b0bfdc34b69a0000f6b9
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.25.5
	
	-------------------------------------------------------------------------------
	
	
	
	==> coredns [8a6bc2722454] <==
	[INFO] 10.244.0.6:35710 - 12945 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000094769s
	[INFO] 10.244.0.6:40396 - 59558 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002397725s
	[INFO] 10.244.0.6:40396 - 13220 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002261618s
	[INFO] 10.244.0.6:41701 - 2509 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000172653s
	[INFO] 10.244.0.6:41701 - 56264 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000122405s
	[INFO] 10.244.0.6:35644 - 11530 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000158573s
	[INFO] 10.244.0.6:35644 - 49423 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000060817s
	[INFO] 10.244.0.6:40443 - 61864 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000117687s
	[INFO] 10.244.0.6:40443 - 27567 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000068776s
	[INFO] 10.244.0.6:60814 - 34362 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000062441s
	[INFO] 10.244.0.6:60814 - 15417 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000050388s
	[INFO] 10.244.0.6:36052 - 45329 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002971242s
	[INFO] 10.244.0.6:36052 - 48147 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.008484785s
	[INFO] 10.244.0.6:41842 - 56965 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000114733s
	[INFO] 10.244.0.6:41842 - 43654 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000954607s
	[INFO] 10.244.0.25:33256 - 7462 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000432364s
	[INFO] 10.244.0.25:43619 - 24358 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000448741s
	[INFO] 10.244.0.25:45168 - 10871 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000119106s
	[INFO] 10.244.0.25:33935 - 3239 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000088632s
	[INFO] 10.244.0.25:59066 - 37034 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000192584s
	[INFO] 10.244.0.25:37915 - 7961 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000120058s
	[INFO] 10.244.0.25:59875 - 27165 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002113171s
	[INFO] 10.244.0.25:39139 - 23852 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002385091s
	[INFO] 10.244.0.25:50261 - 40078 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.002051172s
	[INFO] 10.244.0.25:41950 - 3282 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002223137s
	
	
	==> describe nodes <==
	Name:               addons-522792
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-522792
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fbeeb744274463b05401c917e5ab21bbaf5ef95a
	                    minikube.k8s.io/name=addons-522792
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_14T16_44_55_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-522792
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Sep 2024 16:44:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-522792
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Sep 2024 16:57:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Sep 2024 16:53:34 +0000   Sat, 14 Sep 2024 16:44:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Sep 2024 16:53:34 +0000   Sat, 14 Sep 2024 16:44:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Sep 2024 16:53:34 +0000   Sat, 14 Sep 2024 16:44:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Sep 2024 16:53:34 +0000   Sat, 14 Sep 2024 16:44:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-522792
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 34b1fc558e68472fadfff36e0ead20fb
	  System UUID:                51e9b3f7-0de7-4ee6-8ab8-07bde40bf060
	  Boot ID:                    e1d7fe27-1b83-4ff7-b719-431fa2f274d6
	  Kernel Version:             5.15.0-1069-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m16s
	  default                     cloud-spanner-emulator-769b77f747-xmnh4     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  gadget                      gadget-t9pw2                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  gcp-auth                    gcp-auth-89d5ffd79-ttqps                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-tv6km    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         12m
	  kube-system                 coredns-7c65d6cfc9-2qzgj                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-addons-522792                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-522792                250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-522792       200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-m67s8                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-522792                100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 metrics-server-84c5f94fbc-9n99x             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         12m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          local-path-provisioner-86d989889c-qks4k     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  0 (0%)
	  memory             460Mi (5%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    12m (x4 over 12m)  kubelet          Node addons-522792 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x4 over 12m)  kubelet          Node addons-522792 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  12m (x4 over 12m)  kubelet          Node addons-522792 status is now: NodeHasSufficientMemory
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node addons-522792 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node addons-522792 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                kubelet          Node addons-522792 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                node-controller  Node addons-522792 event: Registered Node addons-522792 in Controller
	
	
	==> dmesg <==
	[Sep14 16:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014659] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.465098] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.740643] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.075028] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> etcd [17a474cc6484] <==
	{"level":"info","ts":"2024-09-14T16:44:48.443762Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-14T16:44:48.443860Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-14T16:44:49.079184Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-14T16:44:49.079448Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-14T16:44:49.079551Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-09-14T16:44:49.079681Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-09-14T16:44:49.079780Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-14T16:44:49.079902Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-14T16:44:49.080016Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-14T16:44:49.083364Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-522792 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-14T16:44:49.083571Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-14T16:44:49.083652Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T16:44:49.087274Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-14T16:44:49.087365Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-14T16:44:49.083671Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-14T16:44:49.087975Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-14T16:44:49.088105Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-14T16:44:49.089112Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-14T16:44:49.089408Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-14T16:44:49.089662Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T16:44:49.089854Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T16:44:49.089962Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T16:54:49.175732Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1863}
	{"level":"info","ts":"2024-09-14T16:54:49.232271Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1863,"took":"55.864325ms","hash":1296306,"current-db-size-bytes":8617984,"current-db-size":"8.6 MB","current-db-size-in-use-bytes":4935680,"current-db-size-in-use":"4.9 MB"}
	{"level":"info","ts":"2024-09-14T16:54:49.232330Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1296306,"revision":1863,"compact-revision":-1}
	
	
	==> gcp-auth [31d5f8ab109c] <==
	2024/09/14 16:47:44 GCP Auth Webhook started!
	2024/09/14 16:48:02 Ready to marshal response ...
	2024/09/14 16:48:02 Ready to write response ...
	2024/09/14 16:48:02 Ready to marshal response ...
	2024/09/14 16:48:02 Ready to write response ...
	2024/09/14 16:48:25 Ready to marshal response ...
	2024/09/14 16:48:25 Ready to write response ...
	2024/09/14 16:48:26 Ready to marshal response ...
	2024/09/14 16:48:26 Ready to write response ...
	2024/09/14 16:48:26 Ready to marshal response ...
	2024/09/14 16:48:26 Ready to write response ...
	2024/09/14 16:56:40 Ready to marshal response ...
	2024/09/14 16:56:40 Ready to write response ...
	2024/09/14 16:56:45 Ready to marshal response ...
	2024/09/14 16:56:45 Ready to write response ...
	2024/09/14 16:57:02 Ready to marshal response ...
	2024/09/14 16:57:02 Ready to write response ...
	2024/09/14 16:57:24 Ready to marshal response ...
	2024/09/14 16:57:24 Ready to write response ...
	2024/09/14 16:57:24 Ready to marshal response ...
	2024/09/14 16:57:24 Ready to write response ...
	2024/09/14 16:57:32 Ready to marshal response ...
	2024/09/14 16:57:32 Ready to write response ...
	
	
	==> kernel <==
	 16:57:43 up 40 min,  0 users,  load average: 1.01, 0.64, 0.56
	Linux addons-522792 5.15.0-1069-aws #75~20.04.1-Ubuntu SMP Mon Aug 19 16:22:47 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kube-apiserver [943eae91bc36] <==
	I0914 16:48:16.761309       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0914 16:48:17.018855       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0914 16:48:17.086950       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0914 16:48:17.310488       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0914 16:48:17.330045       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	W0914 16:48:17.761706       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0914 16:48:17.761726       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0914 16:48:17.761795       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0914 16:48:17.831712       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0914 16:48:18.311278       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0914 16:48:18.415180       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0914 16:56:52.850236       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0914 16:57:18.598844       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0914 16:57:18.602596       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0914 16:57:18.626890       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0914 16:57:18.626938       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0914 16:57:18.643814       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0914 16:57:18.643863       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0914 16:57:18.673943       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0914 16:57:18.675278       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0914 16:57:18.692136       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0914 16:57:18.692184       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0914 16:57:19.628673       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0914 16:57:19.692553       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0914 16:57:19.719748       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [7851162366ed] <==
	E0914 16:57:22.925275       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0914 16:57:23.322244       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 16:57:23.322281       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0914 16:57:26.267527       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 16:57:26.267572       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0914 16:57:26.763679       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 16:57:26.763924       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0914 16:57:28.991618       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0914 16:57:28.991657       1 shared_informer.go:320] Caches are synced for resource quota
	I0914 16:57:29.232170       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0914 16:57:29.232217       1 shared_informer.go:320] Caches are synced for garbage collector
	W0914 16:57:29.558596       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 16:57:29.558638       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0914 16:57:31.929546       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 16:57:31.929588       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0914 16:57:32.611191       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 16:57:32.611242       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0914 16:57:33.097694       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 16:57:33.097739       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0914 16:57:33.127880       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-86d989889c" duration="7.13µs"
	W0914 16:57:35.084667       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 16:57:35.084722       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0914 16:57:38.691053       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0914 16:57:38.691101       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0914 16:57:41.391803       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="4.578µs"
	
	
	==> kube-proxy [63a58c65c961] <==
	I0914 16:45:00.834275       1 server_linux.go:66] "Using iptables proxy"
	I0914 16:45:00.942960       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0914 16:45:00.943045       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0914 16:45:00.997978       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0914 16:45:00.998043       1 server_linux.go:169] "Using iptables Proxier"
	I0914 16:45:01.000726       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0914 16:45:01.001502       1 server.go:483] "Version info" version="v1.31.1"
	I0914 16:45:01.001524       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 16:45:01.008183       1 config.go:199] "Starting service config controller"
	I0914 16:45:01.008226       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0914 16:45:01.008251       1 config.go:105] "Starting endpoint slice config controller"
	I0914 16:45:01.008256       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0914 16:45:01.012249       1 config.go:328] "Starting node config controller"
	I0914 16:45:01.012285       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0914 16:45:01.109318       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0914 16:45:01.109332       1 shared_informer.go:320] Caches are synced for service config
	I0914 16:45:01.112476       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [fdb4207b6223] <==
	E0914 16:44:51.589659       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 16:44:51.589716       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0914 16:44:51.589730       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0914 16:44:51.589059       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0914 16:44:51.588859       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0914 16:44:51.590120       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0914 16:44:52.423263       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0914 16:44:52.423511       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0914 16:44:52.438522       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0914 16:44:52.438788       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0914 16:44:52.597677       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0914 16:44:52.599474       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 16:44:52.610331       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0914 16:44:52.610381       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 16:44:52.610707       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0914 16:44:52.610738       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 16:44:52.624117       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0914 16:44:52.624177       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 16:44:52.626555       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0914 16:44:52.626817       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0914 16:44:52.670630       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0914 16:44:52.670883       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 16:44:52.953545       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0914 16:44:52.953827       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0914 16:44:54.971697       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 14 16:57:34 addons-522792 kubelet[2354]: I0914 16:57:34.776731    2354 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9bc9005e-349c-42b1-a0ca-31e7710d0eb4-data" (OuterVolumeSpecName: "data") pod "9bc9005e-349c-42b1-a0ca-31e7710d0eb4" (UID: "9bc9005e-349c-42b1-a0ca-31e7710d0eb4"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 14 16:57:34 addons-522792 kubelet[2354]: I0914 16:57:34.776931    2354 reconciler_common.go:288] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/9bc9005e-349c-42b1-a0ca-31e7710d0eb4-data\") on node \"addons-522792\" DevicePath \"\""
	Sep 14 16:57:34 addons-522792 kubelet[2354]: I0914 16:57:34.777037    2354 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/9bc9005e-349c-42b1-a0ca-31e7710d0eb4-gcp-creds\") on node \"addons-522792\" DevicePath \"\""
	Sep 14 16:57:34 addons-522792 kubelet[2354]: I0914 16:57:34.777102    2354 reconciler_common.go:288] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/9bc9005e-349c-42b1-a0ca-31e7710d0eb4-script\") on node \"addons-522792\" DevicePath \"\""
	Sep 14 16:57:34 addons-522792 kubelet[2354]: I0914 16:57:34.780911    2354 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bc9005e-349c-42b1-a0ca-31e7710d0eb4-kube-api-access-bnfhv" (OuterVolumeSpecName: "kube-api-access-bnfhv") pod "9bc9005e-349c-42b1-a0ca-31e7710d0eb4" (UID: "9bc9005e-349c-42b1-a0ca-31e7710d0eb4"). InnerVolumeSpecName "kube-api-access-bnfhv". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 14 16:57:34 addons-522792 kubelet[2354]: I0914 16:57:34.877843    2354 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-bnfhv\" (UniqueName: \"kubernetes.io/projected/9bc9005e-349c-42b1-a0ca-31e7710d0eb4-kube-api-access-bnfhv\") on node \"addons-522792\" DevicePath \"\""
	Sep 14 16:57:35 addons-522792 kubelet[2354]: E0914 16:57:35.335031    2354 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="a3eb78f8-7a5d-4aea-8aae-d8c8781cc4e5"
	Sep 14 16:57:35 addons-522792 kubelet[2354]: I0914 16:57:35.564119    2354 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b56a2ab00960243fab703f0d812246fd4449f02c695d47da794279985e15afe8"
	Sep 14 16:57:38 addons-522792 kubelet[2354]: I0914 16:57:38.343781    2354 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bc9005e-349c-42b1-a0ca-31e7710d0eb4" path="/var/lib/kubelet/pods/9bc9005e-349c-42b1-a0ca-31e7710d0eb4/volumes"
	Sep 14 16:57:40 addons-522792 kubelet[2354]: I0914 16:57:40.926168    2354 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/9e2b570f-dd67-403f-93dc-99a7168d1fc0-gcp-creds\") pod \"9e2b570f-dd67-403f-93dc-99a7168d1fc0\" (UID: \"9e2b570f-dd67-403f-93dc-99a7168d1fc0\") "
	Sep 14 16:57:40 addons-522792 kubelet[2354]: I0914 16:57:40.926723    2354 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6cx2n\" (UniqueName: \"kubernetes.io/projected/9e2b570f-dd67-403f-93dc-99a7168d1fc0-kube-api-access-6cx2n\") pod \"9e2b570f-dd67-403f-93dc-99a7168d1fc0\" (UID: \"9e2b570f-dd67-403f-93dc-99a7168d1fc0\") "
	Sep 14 16:57:40 addons-522792 kubelet[2354]: I0914 16:57:40.926663    2354 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e2b570f-dd67-403f-93dc-99a7168d1fc0-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "9e2b570f-dd67-403f-93dc-99a7168d1fc0" (UID: "9e2b570f-dd67-403f-93dc-99a7168d1fc0"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 14 16:57:40 addons-522792 kubelet[2354]: I0914 16:57:40.932689    2354 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e2b570f-dd67-403f-93dc-99a7168d1fc0-kube-api-access-6cx2n" (OuterVolumeSpecName: "kube-api-access-6cx2n") pod "9e2b570f-dd67-403f-93dc-99a7168d1fc0" (UID: "9e2b570f-dd67-403f-93dc-99a7168d1fc0"). InnerVolumeSpecName "kube-api-access-6cx2n". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 14 16:57:41 addons-522792 kubelet[2354]: I0914 16:57:41.027744    2354 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/9e2b570f-dd67-403f-93dc-99a7168d1fc0-gcp-creds\") on node \"addons-522792\" DevicePath \"\""
	Sep 14 16:57:41 addons-522792 kubelet[2354]: I0914 16:57:41.027784    2354 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-6cx2n\" (UniqueName: \"kubernetes.io/projected/9e2b570f-dd67-403f-93dc-99a7168d1fc0-kube-api-access-6cx2n\") on node \"addons-522792\" DevicePath \"\""
	Sep 14 16:57:41 addons-522792 kubelet[2354]: I0914 16:57:41.799407    2354 scope.go:117] "RemoveContainer" containerID="b3d71d7f189b3a87445b058f7e5053b67544603c53735e7e7c821190e3e799c3"
	Sep 14 16:57:41 addons-522792 kubelet[2354]: I0914 16:57:41.841399    2354 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bx7hr\" (UniqueName: \"kubernetes.io/projected/1dfaea65-f8b7-4b16-a20d-1537cb255324-kube-api-access-bx7hr\") pod \"1dfaea65-f8b7-4b16-a20d-1537cb255324\" (UID: \"1dfaea65-f8b7-4b16-a20d-1537cb255324\") "
	Sep 14 16:57:41 addons-522792 kubelet[2354]: I0914 16:57:41.855793    2354 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dfaea65-f8b7-4b16-a20d-1537cb255324-kube-api-access-bx7hr" (OuterVolumeSpecName: "kube-api-access-bx7hr") pod "1dfaea65-f8b7-4b16-a20d-1537cb255324" (UID: "1dfaea65-f8b7-4b16-a20d-1537cb255324"). InnerVolumeSpecName "kube-api-access-bx7hr". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 14 16:57:41 addons-522792 kubelet[2354]: I0914 16:57:41.945090    2354 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxs6s\" (UniqueName: \"kubernetes.io/projected/fc4eccaf-180b-499a-bd4f-df2cba03caa3-kube-api-access-qxs6s\") pod \"fc4eccaf-180b-499a-bd4f-df2cba03caa3\" (UID: \"fc4eccaf-180b-499a-bd4f-df2cba03caa3\") "
	Sep 14 16:57:41 addons-522792 kubelet[2354]: I0914 16:57:41.945900    2354 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-bx7hr\" (UniqueName: \"kubernetes.io/projected/1dfaea65-f8b7-4b16-a20d-1537cb255324-kube-api-access-bx7hr\") on node \"addons-522792\" DevicePath \"\""
	Sep 14 16:57:41 addons-522792 kubelet[2354]: I0914 16:57:41.954657    2354 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc4eccaf-180b-499a-bd4f-df2cba03caa3-kube-api-access-qxs6s" (OuterVolumeSpecName: "kube-api-access-qxs6s") pod "fc4eccaf-180b-499a-bd4f-df2cba03caa3" (UID: "fc4eccaf-180b-499a-bd4f-df2cba03caa3"). InnerVolumeSpecName "kube-api-access-qxs6s". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 14 16:57:42 addons-522792 kubelet[2354]: I0914 16:57:42.046830    2354 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-qxs6s\" (UniqueName: \"kubernetes.io/projected/fc4eccaf-180b-499a-bd4f-df2cba03caa3-kube-api-access-qxs6s\") on node \"addons-522792\" DevicePath \"\""
	Sep 14 16:57:42 addons-522792 kubelet[2354]: I0914 16:57:42.348117    2354 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1dfaea65-f8b7-4b16-a20d-1537cb255324" path="/var/lib/kubelet/pods/1dfaea65-f8b7-4b16-a20d-1537cb255324/volumes"
	Sep 14 16:57:42 addons-522792 kubelet[2354]: I0914 16:57:42.348518    2354 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e2b570f-dd67-403f-93dc-99a7168d1fc0" path="/var/lib/kubelet/pods/9e2b570f-dd67-403f-93dc-99a7168d1fc0/volumes"
	Sep 14 16:57:42 addons-522792 kubelet[2354]: I0914 16:57:42.836407    2354 scope.go:117] "RemoveContainer" containerID="1a0590ce53343ee8d6a3111e046da63f551bb75d16cff089864482c296df9521"
	
	
	==> storage-provisioner [3851d8f2fcf6] <==
	I0914 16:45:07.566868       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0914 16:45:07.585132       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0914 16:45:07.585183       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0914 16:45:07.595776       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0914 16:45:07.596241       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d944add7-be74-4b45-8805-68f8b496faa4", APIVersion:"v1", ResourceVersion:"599", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-522792_5245731b-7307-4706-a3da-983835831cdd became leader
	I0914 16:45:07.596332       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-522792_5245731b-7307-4706-a3da-983835831cdd!
	I0914 16:45:07.696773       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-522792_5245731b-7307-4706-a3da-983835831cdd!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-522792 -n addons-522792
helpers_test.go:261: (dbg) Run:  kubectl --context addons-522792 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-7trzr ingress-nginx-admission-patch-cd4xv
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-522792 describe pod busybox ingress-nginx-admission-create-7trzr ingress-nginx-admission-patch-cd4xv
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-522792 describe pod busybox ingress-nginx-admission-create-7trzr ingress-nginx-admission-patch-cd4xv: exit status 1 (103.18712ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-522792/192.168.49.2
	Start Time:       Sat, 14 Sep 2024 16:48:26 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w9xhd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-w9xhd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m18s                   default-scheduler  Successfully assigned default/busybox to addons-522792
	  Normal   Pulling    7m46s (x4 over 9m18s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m46s (x4 over 9m17s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m46s (x4 over 9m17s)   kubelet            Error: ErrImagePull
	  Warning  Failed     7m21s (x6 over 9m17s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m13s (x19 over 9m17s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-7trzr" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-cd4xv" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-522792 describe pod busybox ingress-nginx-admission-create-7trzr ingress-nginx-admission-patch-cd4xv: exit status 1
--- FAIL: TestAddons/parallel/Registry (74.68s)


Test pass (318/343)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 13.07
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.21
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.31.1/json-events 6.35
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.08
18 TestDownloadOnly/v1.31.1/DeleteAll 0.2
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.58
22 TestOffline 89.65
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 223.83
29 TestAddons/serial/Volcano 40.31
31 TestAddons/serial/GCPAuth/Namespaces 0.19
34 TestAddons/parallel/Ingress 19.33
35 TestAddons/parallel/InspektorGadget 11.89
36 TestAddons/parallel/MetricsServer 5.7
39 TestAddons/parallel/CSI 37.6
40 TestAddons/parallel/Headlamp 16.08
41 TestAddons/parallel/CloudSpanner 5.52
42 TestAddons/parallel/LocalPath 51.68
43 TestAddons/parallel/NvidiaDevicePlugin 5.46
44 TestAddons/parallel/Yakd 11.8
45 TestAddons/StoppedEnableDisable 6.11
46 TestCertOptions 38.15
47 TestCertExpiration 253.2
48 TestDockerFlags 34.29
49 TestForceSystemdFlag 55.33
50 TestForceSystemdEnv 42.58
56 TestErrorSpam/setup 31.75
57 TestErrorSpam/start 0.74
58 TestErrorSpam/status 1.09
59 TestErrorSpam/pause 1.42
60 TestErrorSpam/unpause 1.56
61 TestErrorSpam/stop 10.99
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 76.02
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 36.7
68 TestFunctional/serial/KubeContext 0.06
69 TestFunctional/serial/KubectlGetPods 0.11
72 TestFunctional/serial/CacheCmd/cache/add_remote 3.45
73 TestFunctional/serial/CacheCmd/cache/add_local 1.05
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
75 TestFunctional/serial/CacheCmd/cache/list 0.06
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.33
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.66
78 TestFunctional/serial/CacheCmd/cache/delete 0.14
79 TestFunctional/serial/MinikubeKubectlCmd 0.13
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
81 TestFunctional/serial/ExtraConfig 42.02
82 TestFunctional/serial/ComponentHealth 0.11
83 TestFunctional/serial/LogsCmd 1.17
84 TestFunctional/serial/LogsFileCmd 1.18
85 TestFunctional/serial/InvalidService 4.86
87 TestFunctional/parallel/ConfigCmd 0.47
88 TestFunctional/parallel/DashboardCmd 13.21
89 TestFunctional/parallel/DryRun 0.45
90 TestFunctional/parallel/InternationalLanguage 0.2
91 TestFunctional/parallel/StatusCmd 1.02
95 TestFunctional/parallel/ServiceCmdConnect 10.7
96 TestFunctional/parallel/AddonsCmd 0.17
97 TestFunctional/parallel/PersistentVolumeClaim 25.82
99 TestFunctional/parallel/SSHCmd 0.69
100 TestFunctional/parallel/CpCmd 2.27
102 TestFunctional/parallel/FileSync 0.27
103 TestFunctional/parallel/CertSync 1.7
107 TestFunctional/parallel/NodeLabels 0.08
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.43
111 TestFunctional/parallel/License 0.27
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.63
114 TestFunctional/parallel/Version/short 0.08
115 TestFunctional/parallel/Version/components 1.07
116 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.26
120 TestFunctional/parallel/ImageCommands/ImageBuild 3.54
121 TestFunctional/parallel/ImageCommands/Setup 0.78
122 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
124 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.37
125 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.2
126 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.89
127 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.15
128 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.33
129 TestFunctional/parallel/ImageCommands/ImageRemove 0.43
130 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.62
131 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.37
132 TestFunctional/parallel/DockerEnv/bash 1.17
133 TestFunctional/parallel/UpdateContextCmd/no_changes 0.14
134 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
135 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.14
136 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.12
137 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
141 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
142 TestFunctional/parallel/MountCmd/any-port 8.41
143 TestFunctional/parallel/MountCmd/specific-port 2.04
144 TestFunctional/parallel/MountCmd/VerifyCleanup 1.99
145 TestFunctional/parallel/ServiceCmd/DeployApp 8.23
146 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
147 TestFunctional/parallel/ProfileCmd/profile_list 0.4
148 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
149 TestFunctional/parallel/ServiceCmd/List 1.44
150 TestFunctional/parallel/ServiceCmd/JSONOutput 1.41
151 TestFunctional/parallel/ServiceCmd/HTTPS 0.42
152 TestFunctional/parallel/ServiceCmd/Format 0.42
153 TestFunctional/parallel/ServiceCmd/URL 0.54
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.01
156 TestFunctional/delete_minikube_cached_images 0.02
160 TestMultiControlPlane/serial/StartCluster 123.72
161 TestMultiControlPlane/serial/DeployApp 61.49
162 TestMultiControlPlane/serial/PingHostFromPods 1.72
163 TestMultiControlPlane/serial/AddWorkerNode 24.67
164 TestMultiControlPlane/serial/NodeLabels 0.11
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.8
166 TestMultiControlPlane/serial/CopyFile 20.06
167 TestMultiControlPlane/serial/StopSecondaryNode 11.73
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.56
169 TestMultiControlPlane/serial/RestartSecondaryNode 65.85
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.81
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 186.23
172 TestMultiControlPlane/serial/DeleteSecondaryNode 11.76
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.54
174 TestMultiControlPlane/serial/StopCluster 33.47
175 TestMultiControlPlane/serial/RestartCluster 87.5
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.62
177 TestMultiControlPlane/serial/AddSecondaryNode 42.53
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.81
181 TestImageBuild/serial/Setup 32.79
182 TestImageBuild/serial/NormalBuild 1.77
183 TestImageBuild/serial/BuildWithBuildArg 0.97
184 TestImageBuild/serial/BuildWithDockerIgnore 0.92
185 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.79
189 TestJSONOutput/start/Command 79.28
190 TestJSONOutput/start/Audit 0
192 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/pause/Command 0.62
196 TestJSONOutput/pause/Audit 0
198 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/unpause/Command 0.58
202 TestJSONOutput/unpause/Audit 0
204 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
207 TestJSONOutput/stop/Command 5.75
208 TestJSONOutput/stop/Audit 0
210 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
211 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
212 TestErrorJSONOutput 0.22
214 TestKicCustomNetwork/create_custom_network 33.33
215 TestKicCustomNetwork/use_default_bridge_network 36.77
216 TestKicExistingNetwork 34.23
217 TestKicCustomSubnet 34.19
218 TestKicStaticIP 31.87
219 TestMainNoArgs 0.05
220 TestMinikubeProfile 75.73
223 TestMountStart/serial/StartWithMountFirst 8.8
224 TestMountStart/serial/VerifyMountFirst 0.26
225 TestMountStart/serial/StartWithMountSecond 8.44
226 TestMountStart/serial/VerifyMountSecond 0.28
227 TestMountStart/serial/DeleteFirst 1.48
228 TestMountStart/serial/VerifyMountPostDelete 0.27
229 TestMountStart/serial/Stop 1.22
230 TestMountStart/serial/RestartStopped 8.43
231 TestMountStart/serial/VerifyMountPostStop 0.26
234 TestMultiNode/serial/FreshStart2Nodes 62.37
235 TestMultiNode/serial/DeployApp2Nodes 37.45
236 TestMultiNode/serial/PingHostFrom2Pods 1.05
237 TestMultiNode/serial/AddNode 17.61
238 TestMultiNode/serial/MultiNodeLabels 0.11
239 TestMultiNode/serial/ProfileList 0.36
240 TestMultiNode/serial/CopyFile 10.28
241 TestMultiNode/serial/StopNode 2.32
242 TestMultiNode/serial/StartAfterStop 11.37
243 TestMultiNode/serial/RestartKeepsNodes 103.43
244 TestMultiNode/serial/DeleteNode 5.69
245 TestMultiNode/serial/StopMultiNode 21.63
246 TestMultiNode/serial/RestartMultiNode 50.33
247 TestMultiNode/serial/ValidateNameConflict 34.29
252 TestPreload 102.08
254 TestScheduledStopUnix 105.98
255 TestSkaffold 116.2
257 TestInsufficientStorage 11.05
258 TestRunningBinaryUpgrade 89.66
260 TestKubernetesUpgrade 388.87
261 TestMissingContainerUpgrade 169.71
263 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
264 TestNoKubernetes/serial/StartWithK8s 45.07
265 TestNoKubernetes/serial/StartWithStopK8s 18.9
266 TestNoKubernetes/serial/Start 7.77
267 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
268 TestNoKubernetes/serial/ProfileList 1
269 TestNoKubernetes/serial/Stop 1.22
270 TestNoKubernetes/serial/StartNoArgs 7.29
271 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
272 TestStoppedBinaryUpgrade/Setup 0.82
273 TestStoppedBinaryUpgrade/Upgrade 93.52
274 TestStoppedBinaryUpgrade/MinikubeLogs 1.42
294 TestPause/serial/Start 74.9
295 TestPause/serial/SecondStartNoReconfiguration 31.77
296 TestPause/serial/Pause 0.8
297 TestPause/serial/VerifyStatus 0.39
298 TestPause/serial/Unpause 0.7
299 TestPause/serial/PauseAgain 1.03
300 TestPause/serial/DeletePaused 2.5
301 TestPause/serial/VerifyDeletedResources 5.31
302 TestNetworkPlugins/group/auto/Start 45.8
303 TestNetworkPlugins/group/auto/KubeletFlags 0.31
304 TestNetworkPlugins/group/auto/NetCatPod 9.33
305 TestNetworkPlugins/group/auto/DNS 0.25
306 TestNetworkPlugins/group/auto/Localhost 0.16
307 TestNetworkPlugins/group/auto/HairPin 0.16
308 TestNetworkPlugins/group/kindnet/Start 56.47
309 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
310 TestNetworkPlugins/group/kindnet/KubeletFlags 0.36
311 TestNetworkPlugins/group/kindnet/NetCatPod 10.32
312 TestNetworkPlugins/group/kindnet/DNS 0.19
313 TestNetworkPlugins/group/kindnet/Localhost 0.18
314 TestNetworkPlugins/group/kindnet/HairPin 0.16
315 TestNetworkPlugins/group/calico/Start 87.78
316 TestNetworkPlugins/group/custom-flannel/Start 62.42
317 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.43
318 TestNetworkPlugins/group/custom-flannel/NetCatPod 14.39
319 TestNetworkPlugins/group/custom-flannel/DNS 0.24
320 TestNetworkPlugins/group/custom-flannel/Localhost 0.2
321 TestNetworkPlugins/group/custom-flannel/HairPin 0.23
322 TestNetworkPlugins/group/calico/ControllerPod 6.01
323 TestNetworkPlugins/group/calico/KubeletFlags 0.37
324 TestNetworkPlugins/group/calico/NetCatPod 12.38
325 TestNetworkPlugins/group/false/Start 48.97
326 TestNetworkPlugins/group/calico/DNS 0.21
327 TestNetworkPlugins/group/calico/Localhost 0.3
328 TestNetworkPlugins/group/calico/HairPin 0.22
329 TestNetworkPlugins/group/enable-default-cni/Start 78.89
330 TestNetworkPlugins/group/false/KubeletFlags 0.37
331 TestNetworkPlugins/group/false/NetCatPod 11.34
332 TestNetworkPlugins/group/false/DNS 0.32
333 TestNetworkPlugins/group/false/Localhost 0.28
334 TestNetworkPlugins/group/false/HairPin 0.29
335 TestNetworkPlugins/group/flannel/Start 61.19
336 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.35
337 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.3
338 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
339 TestNetworkPlugins/group/enable-default-cni/Localhost 0.19
340 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
341 TestNetworkPlugins/group/flannel/ControllerPod 6.01
342 TestNetworkPlugins/group/bridge/Start 83.56
343 TestNetworkPlugins/group/flannel/KubeletFlags 0.39
344 TestNetworkPlugins/group/flannel/NetCatPod 14.35
345 TestNetworkPlugins/group/flannel/DNS 0.29
346 TestNetworkPlugins/group/flannel/Localhost 0.39
347 TestNetworkPlugins/group/flannel/HairPin 0.24
348 TestNetworkPlugins/group/kubenet/Start 75.27
349 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
350 TestNetworkPlugins/group/bridge/NetCatPod 10.28
351 TestNetworkPlugins/group/bridge/DNS 0.21
352 TestNetworkPlugins/group/bridge/Localhost 0.16
353 TestNetworkPlugins/group/bridge/HairPin 0.17
355 TestStartStop/group/old-k8s-version/serial/FirstStart 180.51
356 TestNetworkPlugins/group/kubenet/KubeletFlags 0.38
357 TestNetworkPlugins/group/kubenet/NetCatPod 13.38
358 TestNetworkPlugins/group/kubenet/DNS 0.23
359 TestNetworkPlugins/group/kubenet/Localhost 0.18
360 TestNetworkPlugins/group/kubenet/HairPin 0.17
362 TestStartStop/group/no-preload/serial/FirstStart 53.84
363 TestStartStop/group/no-preload/serial/DeployApp 9.37
364 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.16
365 TestStartStop/group/no-preload/serial/Stop 10.9
366 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
367 TestStartStop/group/no-preload/serial/SecondStart 330.5
368 TestStartStop/group/old-k8s-version/serial/DeployApp 9.63
369 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.06
370 TestStartStop/group/old-k8s-version/serial/Stop 11.11
371 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
372 TestStartStop/group/old-k8s-version/serial/SecondStart 126.56
373 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
374 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
375 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
376 TestStartStop/group/old-k8s-version/serial/Pause 2.87
378 TestStartStop/group/embed-certs/serial/FirstStart 68.16
379 TestStartStop/group/embed-certs/serial/DeployApp 10.41
380 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.43
381 TestStartStop/group/embed-certs/serial/Stop 11.15
382 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.3
383 TestStartStop/group/embed-certs/serial/SecondStart 266.57
384 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
385 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.14
386 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.29
387 TestStartStop/group/no-preload/serial/Pause 3.84
389 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 75.89
390 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.41
391 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.1
392 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.93
393 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
394 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 266.86
395 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
396 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
397 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
398 TestStartStop/group/embed-certs/serial/Pause 2.87
400 TestStartStop/group/newest-cni/serial/FirstStart 42.94
401 TestStartStop/group/newest-cni/serial/DeployApp 0
402 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.16
403 TestStartStop/group/newest-cni/serial/Stop 8.61
404 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
405 TestStartStop/group/newest-cni/serial/SecondStart 18.77
406 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
407 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
408 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.35
409 TestStartStop/group/newest-cni/serial/Pause 3.06
410 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
411 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.11
412 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23
413 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.78
TestDownloadOnly/v1.20.0/json-events (13.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-472444 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-472444 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (13.073686658s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (13.07s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-472444
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-472444: exit status 85 (67.680101ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-472444 | jenkins | v1.34.0 | 14 Sep 24 16:43 UTC |          |
	|         | -p download-only-472444        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/14 16:43:39
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 16:43:39.667345    7543 out.go:345] Setting OutFile to fd 1 ...
	I0914 16:43:39.667495    7543 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 16:43:39.667507    7543 out.go:358] Setting ErrFile to fd 2...
	I0914 16:43:39.667512    7543 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 16:43:39.667774    7543 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-2222/.minikube/bin
	W0914 16:43:39.667905    7543 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19643-2222/.minikube/config/config.json: open /home/jenkins/minikube-integration/19643-2222/.minikube/config/config.json: no such file or directory
	I0914 16:43:39.668299    7543 out.go:352] Setting JSON to true
	I0914 16:43:39.669099    7543 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1568,"bootTime":1726330652,"procs":148,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0914 16:43:39.669168    7543 start.go:139] virtualization:  
	I0914 16:43:39.672256    7543 out.go:97] [download-only-472444] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W0914 16:43:39.672414    7543 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19643-2222/.minikube/cache/preloaded-tarball: no such file or directory
	I0914 16:43:39.672468    7543 notify.go:220] Checking for updates...
	I0914 16:43:39.674306    7543 out.go:169] MINIKUBE_LOCATION=19643
	I0914 16:43:39.677127    7543 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 16:43:39.679292    7543 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19643-2222/kubeconfig
	I0914 16:43:39.681055    7543 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-2222/.minikube
	I0914 16:43:39.682963    7543 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0914 16:43:39.686644    7543 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0914 16:43:39.686917    7543 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 16:43:39.708838    7543 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0914 16:43:39.708949    7543 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 16:43:40.041039    7543 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-14 16:43:40.005355541 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0914 16:43:40.041174    7543 docker.go:318] overlay module found
	I0914 16:43:40.043122    7543 out.go:97] Using the docker driver based on user configuration
	I0914 16:43:40.043261    7543 start.go:297] selected driver: docker
	I0914 16:43:40.043276    7543 start.go:901] validating driver "docker" against <nil>
	I0914 16:43:40.043408    7543 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 16:43:40.102606    7543 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-14 16:43:40.093031897 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0914 16:43:40.102811    7543 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 16:43:40.103099    7543 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0914 16:43:40.103336    7543 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0914 16:43:40.105676    7543 out.go:169] Using Docker driver with root privileges
	I0914 16:43:40.107291    7543 cni.go:84] Creating CNI manager for ""
	I0914 16:43:40.107374    7543 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0914 16:43:40.107469    7543 start.go:340] cluster config:
	{Name:download-only-472444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-472444 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 16:43:40.109301    7543 out.go:97] Starting "download-only-472444" primary control-plane node in "download-only-472444" cluster
	I0914 16:43:40.109342    7543 cache.go:121] Beginning downloading kic base image for docker with docker
	I0914 16:43:40.111329    7543 out.go:97] Pulling base image v0.0.45-1726281268-19643 ...
	I0914 16:43:40.111373    7543 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0914 16:43:40.111526    7543 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e in local docker daemon
	I0914 16:43:40.128133    7543 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e to local cache
	I0914 16:43:40.128314    7543 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e in local cache directory
	I0914 16:43:40.128435    7543 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e to local cache
	I0914 16:43:40.178392    7543 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0914 16:43:40.178421    7543 cache.go:56] Caching tarball of preloaded images
	I0914 16:43:40.178591    7543 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0914 16:43:40.180637    7543 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0914 16:43:40.180668    7543 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0914 16:43:40.263589    7543 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /home/jenkins/minikube-integration/19643-2222/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0914 16:43:44.671682    7543 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0914 16:43:44.671788    7543 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19643-2222/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0914 16:43:45.690632    7543 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0914 16:43:45.691044    7543 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/download-only-472444/config.json ...
	I0914 16:43:45.691121    7543 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/download-only-472444/config.json: {Name:mk66cfce80b9ee59007e8612cf4ddeaa13d74494 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 16:43:45.691336    7543 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0914 16:43:45.691522    7543 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/19643-2222/.minikube/cache/linux/arm64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-472444 host does not exist
	  To start a cluster, run: "minikube start -p download-only-472444"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-472444
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.31.1/json-events (6.35s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-650951 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-650951 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker  --container-runtime=docker: (6.350801513s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (6.35s)

                                                
                                    
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-650951
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-650951: exit status 85 (80.906751ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-472444 | jenkins | v1.34.0 | 14 Sep 24 16:43 UTC |                     |
	|         | -p download-only-472444        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 14 Sep 24 16:43 UTC | 14 Sep 24 16:43 UTC |
	| delete  | -p download-only-472444        | download-only-472444 | jenkins | v1.34.0 | 14 Sep 24 16:43 UTC | 14 Sep 24 16:43 UTC |
	| start   | -o=json --download-only        | download-only-650951 | jenkins | v1.34.0 | 14 Sep 24 16:43 UTC |                     |
	|         | -p download-only-650951        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/14 16:43:53
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 16:43:53.163987    7744 out.go:345] Setting OutFile to fd 1 ...
	I0914 16:43:53.164214    7744 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 16:43:53.164240    7744 out.go:358] Setting ErrFile to fd 2...
	I0914 16:43:53.164261    7744 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 16:43:53.164545    7744 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-2222/.minikube/bin
	I0914 16:43:53.165001    7744 out.go:352] Setting JSON to true
	I0914 16:43:53.165798    7744 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1582,"bootTime":1726330652,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0914 16:43:53.165894    7744 start.go:139] virtualization:  
	I0914 16:43:53.168262    7744 out.go:97] [download-only-650951] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0914 16:43:53.168525    7744 notify.go:220] Checking for updates...
	I0914 16:43:53.170433    7744 out.go:169] MINIKUBE_LOCATION=19643
	I0914 16:43:53.172250    7744 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 16:43:53.173968    7744 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19643-2222/kubeconfig
	I0914 16:43:53.175516    7744 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-2222/.minikube
	I0914 16:43:53.177271    7744 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0914 16:43:53.180959    7744 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0914 16:43:53.181192    7744 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 16:43:53.201765    7744 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0914 16:43:53.201886    7744 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 16:43:53.266233    7744 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-14 16:43:53.256662326 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0914 16:43:53.266355    7744 docker.go:318] overlay module found
	I0914 16:43:53.268228    7744 out.go:97] Using the docker driver based on user configuration
	I0914 16:43:53.268255    7744 start.go:297] selected driver: docker
	I0914 16:43:53.268262    7744 start.go:901] validating driver "docker" against <nil>
	I0914 16:43:53.268374    7744 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 16:43:53.326148    7744 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-14 16:43:53.317083041 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0914 16:43:53.326368    7744 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 16:43:53.326661    7744 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0914 16:43:53.326838    7744 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0914 16:43:53.328935    7744 out.go:169] Using Docker driver with root privileges
	I0914 16:43:53.330594    7744 cni.go:84] Creating CNI manager for ""
	I0914 16:43:53.330679    7744 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0914 16:43:53.330703    7744 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0914 16:43:53.330803    7744 start.go:340] cluster config:
	{Name:download-only-650951 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-650951 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 16:43:53.332963    7744 out.go:97] Starting "download-only-650951" primary control-plane node in "download-only-650951" cluster
	I0914 16:43:53.332992    7744 cache.go:121] Beginning downloading kic base image for docker with docker
	I0914 16:43:53.334804    7744 out.go:97] Pulling base image v0.0.45-1726281268-19643 ...
	I0914 16:43:53.334840    7744 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 16:43:53.335019    7744 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e in local docker daemon
	I0914 16:43:53.350513    7744 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e to local cache
	I0914 16:43:53.350644    7744 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e in local cache directory
	I0914 16:43:53.350667    7744 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e in local cache directory, skipping pull
	I0914 16:43:53.350675    7744 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e exists in cache, skipping pull
	I0914 16:43:53.350683    7744 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e as a tarball
	I0914 16:43:53.390929    7744 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0914 16:43:53.390980    7744 cache.go:56] Caching tarball of preloaded images
	I0914 16:43:53.391165    7744 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0914 16:43:53.393180    7744 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0914 16:43:53.393222    7744 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I0914 16:43:53.472492    7744 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4?checksum=md5:402f69b5e09ccb1e1dbe401b4cdd104d -> /home/jenkins/minikube-integration/19643-2222/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0914 16:43:58.036987    7744 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I0914 16:43:58.037107    7744 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19643-2222/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-650951 host does not exist
	  To start a cluster, run: "minikube start -p download-only-650951"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

TestDownloadOnly/v1.31.1/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.20s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-650951
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.58s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-554567 --alsologtostderr --binary-mirror http://127.0.0.1:46851 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-554567" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-554567
--- PASS: TestBinaryMirror (0.58s)

TestOffline (89.65s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-arm64 start -p offline-docker-878776 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-arm64 start -p offline-docker-878776 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (1m27.196241306s)
helpers_test.go:175: Cleaning up "offline-docker-878776" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p offline-docker-878776
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p offline-docker-878776: (2.450355898s)
--- PASS: TestOffline (89.65s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-522792
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-522792: exit status 85 (60.801177ms)

-- stdout --
	* Profile "addons-522792" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-522792"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-522792
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-522792: exit status 85 (66.365432ms)

-- stdout --
	* Profile "addons-522792" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-522792"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (223.83s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-522792 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-522792 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns: (3m43.830984168s)
--- PASS: TestAddons/Setup (223.83s)

TestAddons/serial/Volcano (40.31s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:913: volcano-controller stabilized in 67.079391ms
addons_test.go:905: volcano-admission stabilized in 67.357415ms
addons_test.go:897: volcano-scheduler stabilized in 67.394035ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-8vlr9" [d4b16534-702b-4f12-8e6a-72c62a40a4b8] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003798275s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-7sv6t" [c48b79ef-13d7-477c-acaf-4a773416c412] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003609839s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-dmb4j" [1fd1aa2c-7890-4c03-b5c5-db640e8b56d9] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.005773662s
addons_test.go:932: (dbg) Run:  kubectl --context addons-522792 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-522792 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-522792 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [d24ba718-fb0c-489c-b78e-cf86291eec13] Pending
helpers_test.go:344: "test-job-nginx-0" [d24ba718-fb0c-489c-b78e-cf86291eec13] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [d24ba718-fb0c-489c-b78e-cf86291eec13] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.004760302s
addons_test.go:968: (dbg) Run:  out/minikube-linux-arm64 -p addons-522792 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-linux-arm64 -p addons-522792 addons disable volcano --alsologtostderr -v=1: (10.585299621s)
--- PASS: TestAddons/serial/Volcano (40.31s)

TestAddons/serial/GCPAuth/Namespaces (0.19s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-522792 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-522792 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

TestAddons/parallel/Ingress (19.33s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-522792 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-522792 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-522792 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [81a89dae-07ab-40f0-b52f-242b1d516833] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [81a89dae-07ab-40f0-b52f-242b1d516833] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.004404819s
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-522792 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-522792 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-522792 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p addons-522792 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:313: (dbg) Run:  out/minikube-linux-arm64 -p addons-522792 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-arm64 -p addons-522792 addons disable ingress --alsologtostderr -v=1: (7.791219516s)
--- PASS: TestAddons/parallel/Ingress (19.33s)

TestAddons/parallel/InspektorGadget (11.89s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-t9pw2" [1682f3fa-8b2f-4355-ace0-7180dcdbb012] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004236277s
addons_test.go:851: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-522792
addons_test.go:851: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-522792: (5.885355657s)
--- PASS: TestAddons/parallel/InspektorGadget (11.89s)

TestAddons/parallel/MetricsServer (5.7s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.795004ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-9n99x" [ae800b4d-af12-4463-8d38-890543f21aef] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004574299s
addons_test.go:417: (dbg) Run:  kubectl --context addons-522792 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-arm64 -p addons-522792 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.70s)

TestAddons/parallel/CSI (37.6s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 8.863626ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-522792 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522792 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522792 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522792 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522792 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522792 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-522792 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [cd6ccf53-e3a5-43fa-a585-49136e9f99ae] Pending
helpers_test.go:344: "task-pv-pod" [cd6ccf53-e3a5-43fa-a585-49136e9f99ae] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [cd6ccf53-e3a5-43fa-a585-49136e9f99ae] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.003948609s
addons_test.go:590: (dbg) Run:  kubectl --context addons-522792 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-522792 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-522792 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-522792 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-522792 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-522792 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522792 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522792 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522792 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522792 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522792 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522792 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522792 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522792 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-522792 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [a77e2c9b-3858-4287-aa36-4c90ad3b3a71] Pending
helpers_test.go:344: "task-pv-pod-restore" [a77e2c9b-3858-4287-aa36-4c90ad3b3a71] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [a77e2c9b-3858-4287-aa36-4c90ad3b3a71] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004865336s
addons_test.go:632: (dbg) Run:  kubectl --context addons-522792 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-522792 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-522792 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-arm64 -p addons-522792 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-arm64 -p addons-522792 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.80087289s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-arm64 -p addons-522792 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (37.60s)

TestAddons/parallel/Headlamp (16.08s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-522792 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-r5l54" [9005f283-3925-45a4-8c02-250397c7b9ec] Pending
helpers_test.go:344: "headlamp-57fb76fcdb-r5l54" [9005f283-3925-45a4-8c02-250397c7b9ec] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-r5l54" [9005f283-3925-45a4-8c02-250397c7b9ec] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.004815386s
addons_test.go:839: (dbg) Run:  out/minikube-linux-arm64 -p addons-522792 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-arm64 -p addons-522792 addons disable headlamp --alsologtostderr -v=1: (6.110598224s)
--- PASS: TestAddons/parallel/Headlamp (16.08s)

TestAddons/parallel/CloudSpanner (5.52s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-xmnh4" [13ee9824-802c-4889-8fe7-e0347b9c5ec1] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003813385s
addons_test.go:870: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-522792
--- PASS: TestAddons/parallel/CloudSpanner (5.52s)

TestAddons/parallel/LocalPath (51.68s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-522792 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-522792 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522792 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522792 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522792 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522792 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522792 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [98479660-64a4-4a4c-bc42-96d8b28d8e29] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [98479660-64a4-4a4c-bc42-96d8b28d8e29] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [98479660-64a4-4a4c-bc42-96d8b28d8e29] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003684692s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-522792 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-arm64 -p addons-522792 ssh "cat /opt/local-path-provisioner/pvc-23df3344-fb76-4fb6-94e8-b80679b102a4_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-522792 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-522792 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-arm64 -p addons-522792 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-arm64 -p addons-522792 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.510589831s)
--- PASS: TestAddons/parallel/LocalPath (51.68s)

TestAddons/parallel/NvidiaDevicePlugin (5.46s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-57q46" [2c5bc5e1-fe9b-4b0a-9675-a3bba75998e6] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004452477s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-522792
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.46s)

TestAddons/parallel/Yakd (11.8s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-7wrdh" [f494d3f2-8cec-497b-86c7-96371cb9ea08] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003683548s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-arm64 -p addons-522792 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-arm64 -p addons-522792 addons disable yakd --alsologtostderr -v=1: (5.794576582s)
--- PASS: TestAddons/parallel/Yakd (11.80s)

TestAddons/StoppedEnableDisable (6.11s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-522792
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-522792: (5.855164903s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-522792
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-522792
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-522792
--- PASS: TestAddons/StoppedEnableDisable (6.11s)

TestCertOptions (38.15s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-476184 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-476184 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (35.262965843s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-476184 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-476184 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-476184 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-476184" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-476184
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-476184: (2.192779596s)
--- PASS: TestCertOptions (38.15s)

TestCertExpiration (253.2s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-211227 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
E0914 17:42:42.061951    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/skaffold-689497/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:42:45.632319    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/addons-522792/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-211227 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (40.724310387s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-211227 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-211227 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (29.751688204s)
helpers_test.go:175: Cleaning up "cert-expiration-211227" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-211227
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-211227: (2.721154348s)
--- PASS: TestCertExpiration (253.20s)

TestDockerFlags (34.29s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-arm64 start -p docker-flags-974707 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E0914 17:42:23.734912    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/functional-895781/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:51: (dbg) Done: out/minikube-linux-arm64 start -p docker-flags-974707 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (31.31981009s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-974707 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-974707 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-974707" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-flags-974707
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-flags-974707: (2.243706469s)
--- PASS: TestDockerFlags (34.29s)

TestForceSystemdFlag (55.33s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-239458 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-239458 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (52.462700397s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-239458 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-239458" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-239458
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-239458: (2.460174213s)
--- PASS: TestForceSystemdFlag (55.33s)

TestForceSystemdEnv (42.58s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-447265 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-447265 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (39.446216727s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-447265 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-447265" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-447265
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-447265: (2.5260289s)
--- PASS: TestForceSystemdEnv (42.58s)

TestErrorSpam/setup (31.75s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-514928 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-514928 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-514928 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-514928 --driver=docker  --container-runtime=docker: (31.75293265s)
--- PASS: TestErrorSpam/setup (31.75s)

TestErrorSpam/start (0.74s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-514928 --log_dir /tmp/nospam-514928 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-514928 --log_dir /tmp/nospam-514928 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-514928 --log_dir /tmp/nospam-514928 start --dry-run
--- PASS: TestErrorSpam/start (0.74s)

TestErrorSpam/status (1.09s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-514928 --log_dir /tmp/nospam-514928 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-514928 --log_dir /tmp/nospam-514928 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-514928 --log_dir /tmp/nospam-514928 status
--- PASS: TestErrorSpam/status (1.09s)

TestErrorSpam/pause (1.42s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-514928 --log_dir /tmp/nospam-514928 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-514928 --log_dir /tmp/nospam-514928 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-514928 --log_dir /tmp/nospam-514928 pause
--- PASS: TestErrorSpam/pause (1.42s)

TestErrorSpam/unpause (1.56s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-514928 --log_dir /tmp/nospam-514928 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-514928 --log_dir /tmp/nospam-514928 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-514928 --log_dir /tmp/nospam-514928 unpause
--- PASS: TestErrorSpam/unpause (1.56s)

TestErrorSpam/stop (10.99s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-514928 --log_dir /tmp/nospam-514928 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-514928 --log_dir /tmp/nospam-514928 stop: (10.801746422s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-514928 --log_dir /tmp/nospam-514928 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-514928 --log_dir /tmp/nospam-514928 stop
--- PASS: TestErrorSpam/stop (10.99s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19643-2222/.minikube/files/etc/test/nested/copy/7537/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (76.02s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-895781 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-895781 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (1m16.01706378s)
--- PASS: TestFunctional/serial/StartWithProxy (76.02s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (36.7s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-895781 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-895781 --alsologtostderr -v=8: (36.695953438s)
functional_test.go:663: soft start took 36.698976567s for "functional-895781" cluster.
--- PASS: TestFunctional/serial/SoftStart (36.70s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.11s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-895781 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.45s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-895781 cache add registry.k8s.io/pause:3.1: (1.193815931s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-895781 cache add registry.k8s.io/pause:3.3: (1.233506495s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-895781 cache add registry.k8s.io/pause:latest: (1.025210667s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.45s)

TestFunctional/serial/CacheCmd/cache/add_local (1.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-895781 /tmp/TestFunctionalserialCacheCmdcacheadd_local3581222658/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 cache add minikube-local-cache-test:functional-895781
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 cache delete minikube-local-cache-test:functional-895781
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-895781
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.05s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.33s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.33s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.66s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-895781 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (300.658764ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.66s)

TestFunctional/serial/CacheCmd/cache/delete (0.14s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.14s)

TestFunctional/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 kubectl -- --context functional-895781 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-895781 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (42.02s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-895781 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-895781 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (42.019993202s)
functional_test.go:761: restart took 42.02009964s for "functional-895781" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (42.02s)

TestFunctional/serial/ComponentHealth (0.11s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-895781 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

TestFunctional/serial/LogsCmd (1.17s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-895781 logs: (1.169771546s)
--- PASS: TestFunctional/serial/LogsCmd (1.17s)

TestFunctional/serial/LogsFileCmd (1.18s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 logs --file /tmp/TestFunctionalserialLogsFileCmd3139221477/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-895781 logs --file /tmp/TestFunctionalserialLogsFileCmd3139221477/001/logs.txt: (1.174784016s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.18s)

TestFunctional/serial/InvalidService (4.86s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-895781 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-895781
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-895781: exit status 115 (600.270659ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31770 |
	|-----------|-------------|-------------|---------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-895781 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.86s)

TestFunctional/parallel/ConfigCmd (0.47s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-895781 config get cpus: exit status 14 (55.832846ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-895781 config get cpus: exit status 14 (90.77235ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.47s)

TestFunctional/parallel/DashboardCmd (13.21s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-895781 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-895781 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 51260: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.21s)

TestFunctional/parallel/DryRun (0.45s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-895781 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-895781 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (180.379373ms)
-- stdout --
	* [functional-895781] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19643
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19643-2222/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-2222/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0914 17:03:02.105995   50758 out.go:345] Setting OutFile to fd 1 ...
	I0914 17:03:02.106208   50758 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:03:02.106234   50758 out.go:358] Setting ErrFile to fd 2...
	I0914 17:03:02.106253   50758 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:03:02.106531   50758 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-2222/.minikube/bin
	I0914 17:03:02.106936   50758 out.go:352] Setting JSON to false
	I0914 17:03:02.108018   50758 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":2730,"bootTime":1726330652,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0914 17:03:02.108125   50758 start.go:139] virtualization:  
	I0914 17:03:02.111080   50758 out.go:177] * [functional-895781] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0914 17:03:02.113748   50758 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 17:03:02.113811   50758 notify.go:220] Checking for updates...
	I0914 17:03:02.118369   50758 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 17:03:02.120758   50758 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19643-2222/kubeconfig
	I0914 17:03:02.122763   50758 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-2222/.minikube
	I0914 17:03:02.124800   50758 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0914 17:03:02.127027   50758 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 17:03:02.129708   50758 config.go:182] Loaded profile config "functional-895781": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 17:03:02.130280   50758 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 17:03:02.151897   50758 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0914 17:03:02.152025   50758 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 17:03:02.219943   50758 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-14 17:03:02.209478045 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0914 17:03:02.220057   50758 docker.go:318] overlay module found
	I0914 17:03:02.222765   50758 out.go:177] * Using the docker driver based on existing profile
	I0914 17:03:02.224809   50758 start.go:297] selected driver: docker
	I0914 17:03:02.224830   50758 start.go:901] validating driver "docker" against &{Name:functional-895781 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-895781 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 17:03:02.224931   50758 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 17:03:02.227681   50758 out.go:201] 
	W0914 17:03:02.229760   50758 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0914 17:03:02.231880   50758 out.go:201] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-895781 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.45s)

TestFunctional/parallel/InternationalLanguage (0.2s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-895781 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-895781 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (200.34625ms)
-- stdout --
	* [functional-895781] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19643
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19643-2222/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-2222/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0914 17:03:03.574984   51073 out.go:345] Setting OutFile to fd 1 ...
	I0914 17:03:03.575146   51073 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:03:03.575177   51073 out.go:358] Setting ErrFile to fd 2...
	I0914 17:03:03.575184   51073 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:03:03.577387   51073 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-2222/.minikube/bin
	I0914 17:03:03.577945   51073 out.go:352] Setting JSON to false
	I0914 17:03:03.579085   51073 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":2732,"bootTime":1726330652,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0914 17:03:03.579203   51073 start.go:139] virtualization:  
	I0914 17:03:03.581716   51073 out.go:177] * [functional-895781] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I0914 17:03:03.584345   51073 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 17:03:03.584505   51073 notify.go:220] Checking for updates...
	I0914 17:03:03.589257   51073 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 17:03:03.591353   51073 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19643-2222/kubeconfig
	I0914 17:03:03.593981   51073 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-2222/.minikube
	I0914 17:03:03.596347   51073 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0914 17:03:03.598244   51073 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 17:03:03.600986   51073 config.go:182] Loaded profile config "functional-895781": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 17:03:03.601642   51073 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 17:03:03.627921   51073 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0914 17:03:03.628046   51073 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 17:03:03.692181   51073 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-14 17:03:03.682284761 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0914 17:03:03.692302   51073 docker.go:318] overlay module found
	I0914 17:03:03.695239   51073 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0914 17:03:03.697692   51073 start.go:297] selected driver: docker
	I0914 17:03:03.697711   51073 start.go:901] validating driver "docker" against &{Name:functional-895781 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-895781 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 17:03:03.697823   51073 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 17:03:03.702053   51073 out.go:201] 
	W0914 17:03:03.711316   51073 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0914 17:03:03.717566   51073 out.go:201] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.20s)

TestFunctional/parallel/StatusCmd (1.02s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.02s)

TestFunctional/parallel/ServiceCmdConnect (10.7s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-895781 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-895781 expose deployment hello-node-connect --type=NodePort --port=8080
E0914 17:02:46.930954    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/addons-522792/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-z2n8n" [5919f618-fcac-45a0-bb24-6549e98d0d67] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-z2n8n" [5919f618-fcac-45a0-bb24-6549e98d0d67] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.004154436s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:32670
functional_test.go:1675: http://192.168.49.2:32670: success! body:

Hostname: hello-node-connect-65d86f57f4-z2n8n

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32670
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.70s)

TestFunctional/parallel/AddonsCmd (0.17s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

TestFunctional/parallel/PersistentVolumeClaim (25.82s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [0cdaaf8a-4088-4bf7-a58a-1cb14343735e] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003591137s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-895781 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-895781 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-895781 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-895781 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [00331957-3825-40c8-a613-b0046bb2a6b1] Pending
helpers_test.go:344: "sp-pod" [00331957-3825-40c8-a613-b0046bb2a6b1] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [00331957-3825-40c8-a613-b0046bb2a6b1] Running
E0914 17:02:48.212408    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/addons-522792/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:02:50.773992    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/addons-522792/client.crt: no such file or directory" logger="UnhandledError"
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.003382876s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-895781 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-895781 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-895781 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [3d26af1a-1a8d-4be1-9595-b3004db82315] Pending
helpers_test.go:344: "sp-pod" [3d26af1a-1a8d-4be1-9595-b3004db82315] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [3d26af1a-1a8d-4be1-9595-b3004db82315] Running
E0914 17:02:55.895335    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/addons-522792/client.crt: no such file or directory" logger="UnhandledError"
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004031797s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-895781 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.82s)

TestFunctional/parallel/SSHCmd (0.69s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.69s)

TestFunctional/parallel/CpCmd (2.27s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 ssh -n functional-895781 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 cp functional-895781:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1154829378/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 ssh -n functional-895781 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 ssh -n functional-895781 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.27s)

TestFunctional/parallel/FileSync (0.27s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/7537/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 ssh "sudo cat /etc/test/nested/copy/7537/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.27s)

TestFunctional/parallel/CertSync (1.7s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/7537.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 ssh "sudo cat /etc/ssl/certs/7537.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/7537.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 ssh "sudo cat /usr/share/ca-certificates/7537.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/75372.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 ssh "sudo cat /etc/ssl/certs/75372.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/75372.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 ssh "sudo cat /usr/share/ca-certificates/75372.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.70s)

TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-895781 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.43s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-895781 ssh "sudo systemctl is-active crio": exit status 1 (434.132173ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.43s)

TestFunctional/parallel/License (0.27s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.27s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.63s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-895781 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-895781 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-895781 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-895781 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 45914: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.63s)

TestFunctional/parallel/Version/short (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (1.07s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-895781 version -o=json --components: (1.073044322s)
--- PASS: TestFunctional/parallel/Version/components (1.07s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-895781 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-895781
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kicbase/echo-server:functional-895781
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-895781 image ls --format short --alsologtostderr:
I0914 17:03:11.039616   51994 out.go:345] Setting OutFile to fd 1 ...
I0914 17:03:11.039789   51994 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 17:03:11.039814   51994 out.go:358] Setting ErrFile to fd 2...
I0914 17:03:11.039836   51994 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 17:03:11.040120   51994 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-2222/.minikube/bin
I0914 17:03:11.040853   51994 config.go:182] Loaded profile config "functional-895781": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0914 17:03:11.041287   51994 config.go:182] Loaded profile config "functional-895781": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0914 17:03:11.042442   51994 cli_runner.go:164] Run: docker container inspect functional-895781 --format={{.State.Status}}
I0914 17:03:11.064833   51994 ssh_runner.go:195] Run: systemctl --version
I0914 17:03:11.064889   51994 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-895781
I0914 17:03:11.103390   51994 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19643-2222/.minikube/machines/functional-895781/id_rsa Username:docker}
I0914 17:03:11.208323   51994 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-895781 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/minikube-local-cache-test | functional-895781 | edb8c91690632 | 30B    |
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 279f381cb3736 | 85.9MB |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 24a140c548c07 | 94.7MB |
| registry.k8s.io/pause                       | 3.10              | afb61768ce381 | 514kB  |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 7f8aa378bb47d | 66MB   |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| docker.io/library/nginx                     | alpine            | b887aca7aed61 | 47MB   |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| localhost/my-image                          | functional-895781 | acdb0a537b818 | 1.41MB |
| registry.k8s.io/kube-apiserver              | v1.31.1           | d3f53a98c0a9d | 91.6MB |
| docker.io/library/nginx                     | latest            | 195245f0c7927 | 193MB  |
| registry.k8s.io/coredns/coredns             | v1.11.3           | 2f6c962e7b831 | 60.2MB |
| registry.k8s.io/etcd                        | 3.5.15-0          | 27e3830e14027 | 139MB  |
| docker.io/kicbase/echo-server               | functional-895781 | ce2d2cda2d858 | 4.78MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-895781 image ls --format table --alsologtostderr:
I0914 17:03:15.341255   52376 out.go:345] Setting OutFile to fd 1 ...
I0914 17:03:15.341409   52376 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 17:03:15.341420   52376 out.go:358] Setting ErrFile to fd 2...
I0914 17:03:15.341426   52376 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 17:03:15.341681   52376 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-2222/.minikube/bin
I0914 17:03:15.342343   52376 config.go:182] Loaded profile config "functional-895781": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0914 17:03:15.342460   52376 config.go:182] Loaded profile config "functional-895781": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0914 17:03:15.342957   52376 cli_runner.go:164] Run: docker container inspect functional-895781 --format={{.State.Status}}
I0914 17:03:15.359568   52376 ssh_runner.go:195] Run: systemctl --version
I0914 17:03:15.359627   52376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-895781
I0914 17:03:15.377412   52376 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19643-2222/.minikube/machines/functional-895781/id_rsa Username:docker}
I0914 17:03:15.479780   52376 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-895781 image ls --format json --alsologtostderr:
[{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"66000000"},{"id":"24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"94700000"},{"id":"b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"47000000"},{"id":"195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139000000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"85900000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-895781"],"size":"4780000"},{"id":"d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"91600000"},{"id":"edb8c916906329a6401cb05b2d75a0476511ca3e6da49df323e7eb43a49a3c77","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-895781"],"size":"30"},{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"60200000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"acdb0a537b818889d8101d60a86ceec38f30eb12a476aca74a2f85d47a0a8402","repoDigests":[],"repoTags":["localhost/my-image:functional-895781"],"size":"1410000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-895781 image ls --format json --alsologtostderr:
I0914 17:03:15.108120   52345 out.go:345] Setting OutFile to fd 1 ...
I0914 17:03:15.108299   52345 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 17:03:15.108310   52345 out.go:358] Setting ErrFile to fd 2...
I0914 17:03:15.108315   52345 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 17:03:15.108591   52345 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-2222/.minikube/bin
I0914 17:03:15.109486   52345 config.go:182] Loaded profile config "functional-895781": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0914 17:03:15.109651   52345 config.go:182] Loaded profile config "functional-895781": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0914 17:03:15.110196   52345 cli_runner.go:164] Run: docker container inspect functional-895781 --format={{.State.Status}}
I0914 17:03:15.131399   52345 ssh_runner.go:195] Run: systemctl --version
I0914 17:03:15.131473   52345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-895781
I0914 17:03:15.151009   52345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19643-2222/.minikube/machines/functional-895781/id_rsa Username:docker}
I0914 17:03:15.256154   52345 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-895781 image ls --format yaml --alsologtostderr:
- id: edb8c916906329a6401cb05b2d75a0476511ca3e6da49df323e7eb43a49a3c77
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-895781
size: "30"
- id: d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "91600000"
- id: 279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "85900000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "66000000"
- id: 24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "94700000"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "514000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "47000000"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "60200000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: 195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139000000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-895781
size: "4780000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-895781 image ls --format yaml --alsologtostderr:
I0914 17:03:11.324047   52027 out.go:345] Setting OutFile to fd 1 ...
I0914 17:03:11.324156   52027 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 17:03:11.324161   52027 out.go:358] Setting ErrFile to fd 2...
I0914 17:03:11.324166   52027 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 17:03:11.324421   52027 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-2222/.minikube/bin
I0914 17:03:11.325247   52027 config.go:182] Loaded profile config "functional-895781": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0914 17:03:11.325366   52027 config.go:182] Loaded profile config "functional-895781": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0914 17:03:11.325825   52027 cli_runner.go:164] Run: docker container inspect functional-895781 --format={{.State.Status}}
I0914 17:03:11.341685   52027 ssh_runner.go:195] Run: systemctl --version
I0914 17:03:11.341743   52027 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-895781
I0914 17:03:11.363985   52027 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19643-2222/.minikube/machines/functional-895781/id_rsa Username:docker}
I0914 17:03:11.467961   52027 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-895781 ssh pgrep buildkitd: exit status 1 (371.622062ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 image build -t localhost/my-image:functional-895781 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-895781 image build -t localhost/my-image:functional-895781 testdata/build --alsologtostderr: (2.923895387s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-895781 image build -t localhost/my-image:functional-895781 testdata/build --alsologtostderr:
I0914 17:03:11.946138   52118 out.go:345] Setting OutFile to fd 1 ...
I0914 17:03:11.946341   52118 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 17:03:11.946347   52118 out.go:358] Setting ErrFile to fd 2...
I0914 17:03:11.946353   52118 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 17:03:11.946603   52118 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-2222/.minikube/bin
I0914 17:03:11.947395   52118 config.go:182] Loaded profile config "functional-895781": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0914 17:03:11.948707   52118 config.go:182] Loaded profile config "functional-895781": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0914 17:03:11.949291   52118 cli_runner.go:164] Run: docker container inspect functional-895781 --format={{.State.Status}}
I0914 17:03:11.968291   52118 ssh_runner.go:195] Run: systemctl --version
I0914 17:03:11.968339   52118 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-895781
I0914 17:03:11.989992   52118 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19643-2222/.minikube/machines/functional-895781/id_rsa Username:docker}
I0914 17:03:12.088513   52118 build_images.go:161] Building image from path: /tmp/build.2701309463.tar
I0914 17:03:12.088586   52118 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0914 17:03:12.101271   52118 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2701309463.tar
I0914 17:03:12.105571   52118 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2701309463.tar: stat -c "%s %y" /var/lib/minikube/build/build.2701309463.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2701309463.tar': No such file or directory
I0914 17:03:12.105598   52118 ssh_runner.go:362] scp /tmp/build.2701309463.tar --> /var/lib/minikube/build/build.2701309463.tar (3072 bytes)
I0914 17:03:12.154090   52118 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2701309463
I0914 17:03:12.165305   52118 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2701309463 -xf /var/lib/minikube/build/build.2701309463.tar
I0914 17:03:12.176516   52118 docker.go:360] Building image: /var/lib/minikube/build/build.2701309463
I0914 17:03:12.176636   52118 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-895781 /var/lib/minikube/build/build.2701309463
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.4s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.5s

#6 [2/3] RUN true
#6 DONE 0.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:acdb0a537b818889d8101d60a86ceec38f30eb12a476aca74a2f85d47a0a8402 done
#8 naming to localhost/my-image:functional-895781 done
#8 DONE 0.1s
I0914 17:03:14.783226   52118 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-895781 /var/lib/minikube/build/build.2701309463: (2.606544674s)
I0914 17:03:14.783318   52118 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2701309463
I0914 17:03:14.792622   52118 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2701309463.tar
I0914 17:03:14.802040   52118 build_images.go:217] Built localhost/my-image:functional-895781 from /tmp/build.2701309463.tar
I0914 17:03:14.802072   52118 build_images.go:133] succeeded building to: functional-895781
I0914 17:03:14.802089   52118 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.54s)
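Editor's note on the build above: the trace shows `image build` packaging the build context into a tar (`/tmp/build.2701309463.tar`), copying it into the node over SSH, and running `docker build` there. Under the assumption that `testdata/build` contains only a Dockerfile and a `content.txt` (only the step names `FROM gcr.io/k8s-minikube/busybox:latest`, `RUN true`, and `ADD content.txt /` come from the BuildKit steps logged above; the directory name and file contents here are hypothetical), an equivalent context can be sketched as:

```shell
# Sketch only: recreate a build context matching the BuildKit steps logged
# above ([1/3] FROM busybox, [2/3] RUN true, [3/3] ADD content.txt /).
# "build-sketch" and the content.txt payload are made-up names/values.
mkdir -p build-sketch
cat > build-sketch/Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox:latest
RUN true
ADD content.txt /
EOF
printf 'hello\n' > build-sketch/content.txt
```

The "#1 transferring dockerfile: 97B" line in the trace is consistent with a Dockerfile of roughly this size.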

TestFunctional/parallel/ImageCommands/Setup (0.78s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-895781
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.78s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-895781 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.37s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-895781 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [bc997e71-c26f-4dc8-897b-c41a04be27a4] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [bc997e71-c26f-4dc8-897b-c41a04be27a4] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.005367977s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.37s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 image load --daemon kicbase/echo-server:functional-895781 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.20s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.89s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 image load --daemon kicbase/echo-server:functional-895781 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.89s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-895781
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 image load --daemon kicbase/echo-server:functional-895781 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.15s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 image save kicbase/echo-server:functional-895781 /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.33s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.43s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 image rm kicbase/echo-server:functional-895781 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.43s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.62s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.62s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-895781
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 image save --daemon kicbase/echo-server:functional-895781 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-895781
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.37s)

TestFunctional/parallel/DockerEnv/bash (1.17s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-895781 docker-env) && out/minikube-linux-arm64 status -p functional-895781"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-895781 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.17s)
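Editor's note on the `eval $(... docker-env)` pattern used in this test: `minikube docker-env` emits shell `export` lines that point the host `docker` CLI at the Docker daemon inside the minikube node, which is why the subsequent `docker images` lists the node's images. A minimal sketch of that output (the values below are placeholders, not taken from this run):

```shell
# Placeholder values: the real command prints the cluster's mapped host port
# and cert paths; evaluating these exports redirects the local docker CLI
# to the daemon inside the minikube container.
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://127.0.0.1:2376"
export DOCKER_CERT_PATH="$HOME/.minikube/certs"
export MINIKUBE_ACTIVE_DOCKERD="functional-895781"
```

Running `eval $(minikube docker-env --unset)` (or unsetting the variables) points the CLI back at the host daemon.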

TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 update-context --alsologtostderr -v=2
2024/09/14 17:03:16 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-895781 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.96.150.118 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-895781 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/MountCmd/any-port (8.41s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-895781 /tmp/TestFunctionalparallelMountCmdany-port3845574861/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726333354032790985" to /tmp/TestFunctionalparallelMountCmdany-port3845574861/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726333354032790985" to /tmp/TestFunctionalparallelMountCmdany-port3845574861/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726333354032790985" to /tmp/TestFunctionalparallelMountCmdany-port3845574861/001/test-1726333354032790985
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-895781 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (563.300636ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 14 17:02 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 14 17:02 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 14 17:02 test-1726333354032790985
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 ssh cat /mount-9p/test-1726333354032790985
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-895781 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [d448e1b2-459f-4f73-b95d-341083621678] Pending
helpers_test.go:344: "busybox-mount" [d448e1b2-459f-4f73-b95d-341083621678] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [d448e1b2-459f-4f73-b95d-341083621678] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [d448e1b2-459f-4f73-b95d-341083621678] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003396945s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-895781 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-895781 /tmp/TestFunctionalparallelMountCmdany-port3845574861/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.41s)

TestFunctional/parallel/MountCmd/specific-port (2.04s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-895781 /tmp/TestFunctionalparallelMountCmdspecific-port1512948924/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-895781 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (533.835289ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-895781 /tmp/TestFunctionalparallelMountCmdspecific-port1512948924/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-895781 ssh "sudo umount -f /mount-9p": exit status 1 (356.77352ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-895781 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-895781 /tmp/TestFunctionalparallelMountCmdspecific-port1512948924/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.04s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.99s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-895781 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1722821853/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-895781 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1722821853/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-895781 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1722821853/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 ssh "findmnt -T" /mount1
E0914 17:02:45.634949    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/addons-522792/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:02:45.642259    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/addons-522792/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:02:45.654136    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/addons-522792/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:02:45.675895    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/addons-522792/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:325: (dbg) Done: out/minikube-linux-arm64 -p functional-895781 ssh "findmnt -T" /mount1: (1.188469994s)
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 ssh "findmnt -T" /mount2
E0914 17:02:45.717168    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/addons-522792/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:02:45.801121    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/addons-522792/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:02:45.968290    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/addons-522792/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 ssh "findmnt -T" /mount3
E0914 17:02:46.289633    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/addons-522792/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-895781 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-895781 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1722821853/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-895781 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1722821853/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-895781 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1722821853/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.99s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (8.23s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-895781 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-895781 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-mkgmv" [0991836d-cc44-46d4-a2fd-e9bbe8d5ccf5] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-mkgmv" [0991836d-cc44-46d4-a2fd-e9bbe8d5ccf5] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.005779245s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.23s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "344.64175ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "55.039757ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.40s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "352.084021ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "60.446942ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (1.44s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 service list
E0914 17:03:06.136948    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/addons-522792/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1459: (dbg) Done: out/minikube-linux-arm64 -p functional-895781 service list: (1.438950032s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.44s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (1.41s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 service list -o json
functional_test.go:1489: (dbg) Done: out/minikube-linux-arm64 -p functional-895781 service list -o json: (1.414604669s)
functional_test.go:1494: Took "1.414693728s" to run "out/minikube-linux-arm64 -p functional-895781 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.41s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.42s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:30133
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.42s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.42s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.42s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.54s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-895781 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:30133
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.54s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-895781
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-895781
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-895781
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (123.72s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-679744 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0914 17:03:26.618841    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/addons-522792/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:04:07.580519    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/addons-522792/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-679744 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (2m2.875719027s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-679744 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (123.72s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (61.49s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-679744 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-679744 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-679744 -- rollout status deployment/busybox: (4.942314184s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-679744 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
E0914 17:05:29.501806    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/addons-522792/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-679744 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-679744 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-679744 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-679744 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-679744 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-679744 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-679744 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-679744 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-679744 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-679744 -- exec busybox-7dff88458-9w5ss -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-679744 -- exec busybox-7dff88458-cw5gg -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-679744 -- exec busybox-7dff88458-s54ms -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-679744 -- exec busybox-7dff88458-9w5ss -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-679744 -- exec busybox-7dff88458-cw5gg -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-679744 -- exec busybox-7dff88458-s54ms -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-679744 -- exec busybox-7dff88458-9w5ss -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-679744 -- exec busybox-7dff88458-cw5gg -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-679744 -- exec busybox-7dff88458-s54ms -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (61.49s)
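The retry loop above (ha_test.go:149) is simply comparing a word count of the jsonpath output against the expected replica count. A minimal standalone sketch using the IP list captured in this log; the `want` value and variable names are illustrative, not the test's actual code:

```shell
#!/bin/sh
# Reproduce the pod-IP readiness check seen in the log above.
# `ips` is the '{.items[*].status.podIP}' jsonpath output from this run.
ips='10.244.0.4 10.244.1.2'
want=3                          # busybox replicas expected across the nodes
got=$(echo "$ips" | wc -w)      # pod IPs are space-separated
if [ "$got" -lt "$want" ]; then
  echo "expected $want Pod IPs but got $got (may be temporary)"
fi
```

The real test re-runs `kubectl get pods -o jsonpath='{.items[*].status.podIP}'` until all three replicas report an IP, which is why the same message repeats before the final successful fetch.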

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.72s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-679744 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-679744 -- exec busybox-7dff88458-9w5ss -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-679744 -- exec busybox-7dff88458-9w5ss -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-679744 -- exec busybox-7dff88458-cw5gg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-679744 -- exec busybox-7dff88458-cw5gg -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-679744 -- exec busybox-7dff88458-s54ms -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-679744 -- exec busybox-7dff88458-s54ms -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.72s)
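The in-pod pipeline above (`nslookup … | awk 'NR==5' | cut -d' ' -f3`) pulls the resolved host IP out of the fifth line of the resolver output, which the test then pings. A sketch against a canned sample; the sample text is illustrative, as real resolver output varies:

```shell
#!/bin/sh
# Canned BusyBox-style nslookup output; line 5 carries the resolved address.
out='Server:    10.96.0.10
Address:   10.96.0.10:53

Name:      host.minikube.internal
Address: 1 192.168.49.1'
# Take line 5, then the third space-separated field.
host_ip=$(echo "$out" | awk 'NR==5' | cut -d' ' -f3)
echo "$host_ip"
```

The extracted address is what the follow-up step exercises with `ping -c 1 192.168.49.1`.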

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (24.67s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-679744 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-679744 -v=7 --alsologtostderr: (23.634309004s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-679744 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-679744 status -v=7 --alsologtostderr: (1.031378423s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (24.67s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.11s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-679744 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.8s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.80s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (20.06s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-679744 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-linux-arm64 -p ha-679744 status --output json -v=7 --alsologtostderr: (1.054977074s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-679744 cp testdata/cp-test.txt ha-679744:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-679744 ssh -n ha-679744 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-679744 cp ha-679744:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2598392030/001/cp-test_ha-679744.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-679744 ssh -n ha-679744 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-679744 cp ha-679744:/home/docker/cp-test.txt ha-679744-m02:/home/docker/cp-test_ha-679744_ha-679744-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-679744 ssh -n ha-679744 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-679744 ssh -n ha-679744-m02 "sudo cat /home/docker/cp-test_ha-679744_ha-679744-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-679744 cp ha-679744:/home/docker/cp-test.txt ha-679744-m03:/home/docker/cp-test_ha-679744_ha-679744-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-679744 ssh -n ha-679744 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-679744 ssh -n ha-679744-m03 "sudo cat /home/docker/cp-test_ha-679744_ha-679744-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-679744 cp ha-679744:/home/docker/cp-test.txt ha-679744-m04:/home/docker/cp-test_ha-679744_ha-679744-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-679744 ssh -n ha-679744 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-679744 ssh -n ha-679744-m04 "sudo cat /home/docker/cp-test_ha-679744_ha-679744-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-679744 cp testdata/cp-test.txt ha-679744-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-679744 ssh -n ha-679744-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-679744 cp ha-679744-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2598392030/001/cp-test_ha-679744-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-679744 ssh -n ha-679744-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-679744 cp ha-679744-m02:/home/docker/cp-test.txt ha-679744:/home/docker/cp-test_ha-679744-m02_ha-679744.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-679744 ssh -n ha-679744-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-679744 ssh -n ha-679744 "sudo cat /home/docker/cp-test_ha-679744-m02_ha-679744.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-679744 cp ha-679744-m02:/home/docker/cp-test.txt ha-679744-m03:/home/docker/cp-test_ha-679744-m02_ha-679744-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-679744 ssh -n ha-679744-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-679744 ssh -n ha-679744-m03 "sudo cat /home/docker/cp-test_ha-679744-m02_ha-679744-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-679744 cp ha-679744-m02:/home/docker/cp-test.txt ha-679744-m04:/home/docker/cp-test_ha-679744-m02_ha-679744-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-679744 ssh -n ha-679744-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-679744 ssh -n ha-679744-m04 "sudo cat /home/docker/cp-test_ha-679744-m02_ha-679744-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-679744 cp testdata/cp-test.txt ha-679744-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-679744 ssh -n ha-679744-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-679744 cp ha-679744-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2598392030/001/cp-test_ha-679744-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-679744 ssh -n ha-679744-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-679744 cp ha-679744-m03:/home/docker/cp-test.txt ha-679744:/home/docker/cp-test_ha-679744-m03_ha-679744.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-679744 ssh -n ha-679744-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-679744 ssh -n ha-679744 "sudo cat /home/docker/cp-test_ha-679744-m03_ha-679744.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-679744 cp ha-679744-m03:/home/docker/cp-test.txt ha-679744-m02:/home/docker/cp-test_ha-679744-m03_ha-679744-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-679744 ssh -n ha-679744-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-679744 ssh -n ha-679744-m02 "sudo cat /home/docker/cp-test_ha-679744-m03_ha-679744-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-679744 cp ha-679744-m03:/home/docker/cp-test.txt ha-679744-m04:/home/docker/cp-test_ha-679744-m03_ha-679744-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-679744 ssh -n ha-679744-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-679744 ssh -n ha-679744-m04 "sudo cat /home/docker/cp-test_ha-679744-m03_ha-679744-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-679744 cp testdata/cp-test.txt ha-679744-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-679744 ssh -n ha-679744-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-679744 cp ha-679744-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2598392030/001/cp-test_ha-679744-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-679744 ssh -n ha-679744-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-679744 cp ha-679744-m04:/home/docker/cp-test.txt ha-679744:/home/docker/cp-test_ha-679744-m04_ha-679744.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-679744 ssh -n ha-679744-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-679744 ssh -n ha-679744 "sudo cat /home/docker/cp-test_ha-679744-m04_ha-679744.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-679744 cp ha-679744-m04:/home/docker/cp-test.txt ha-679744-m02:/home/docker/cp-test_ha-679744-m04_ha-679744-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-679744 ssh -n ha-679744-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-679744 ssh -n ha-679744-m02 "sudo cat /home/docker/cp-test_ha-679744-m04_ha-679744-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-679744 cp ha-679744-m04:/home/docker/cp-test.txt ha-679744-m03:/home/docker/cp-test_ha-679744-m04_ha-679744-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-679744 ssh -n ha-679744-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-679744 ssh -n ha-679744-m03 "sudo cat /home/docker/cp-test_ha-679744-m04_ha-679744-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.06s)
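Each CopyFile step above is a copy followed by a read-back to confirm the bytes arrived on the target node. A local stand-in for one round trip; the paths and file content here are placeholders, as the real test moves testdata/cp-test.txt with `minikube cp` and reads it back over `ssh -n <node> "sudo cat …"`:

```shell
#!/bin/sh
# Stand-in for one cp/read-back round trip from the CopyFile test.
srcdir=$(mktemp -d); dstdir=$(mktemp -d)
printf 'sample cp-test content' > "$srcdir/cp-test.txt"
cp "$srcdir/cp-test.txt" "$dstdir/cp-test.txt"   # stands in for: minikube cp
readback=$(cat "$dstdir/cp-test.txt")            # stands in for: ssh read-back
[ "$readback" = 'sample cp-test content' ] && echo roundtrip-ok
rm -rf "$srcdir" "$dstdir"
```

The test repeats this for every (source node, destination node) pair, which is why the log shows the same cp/cat pattern across ha-679744, -m02, -m03, and -m04.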

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (11.73s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-679744 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-679744 node stop m02 -v=7 --alsologtostderr: (10.959348912s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-679744 status -v=7 --alsologtostderr
E0914 17:07:23.734947    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/functional-895781/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:07:23.741320    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/functional-895781/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:07:23.752703    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/functional-895781/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:07:23.774093    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/functional-895781/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:07:23.816188    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/functional-895781/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-679744 status -v=7 --alsologtostderr: exit status 7 (767.120231ms)

-- stdout --
	ha-679744
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-679744-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-679744-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-679744-m04
	type: Worker
	host: Running
	kubelet: Running

-- /stdout --
** stderr ** 
	I0914 17:07:23.104403   75163 out.go:345] Setting OutFile to fd 1 ...
	I0914 17:07:23.104578   75163 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:07:23.104601   75163 out.go:358] Setting ErrFile to fd 2...
	I0914 17:07:23.104626   75163 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:07:23.104899   75163 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-2222/.minikube/bin
	I0914 17:07:23.105111   75163 out.go:352] Setting JSON to false
	I0914 17:07:23.105173   75163 mustload.go:65] Loading cluster: ha-679744
	I0914 17:07:23.105257   75163 notify.go:220] Checking for updates...
	I0914 17:07:23.106195   75163 config.go:182] Loaded profile config "ha-679744": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 17:07:23.106239   75163 status.go:255] checking status of ha-679744 ...
	I0914 17:07:23.106764   75163 cli_runner.go:164] Run: docker container inspect ha-679744 --format={{.State.Status}}
	I0914 17:07:23.124795   75163 status.go:330] ha-679744 host status = "Running" (err=<nil>)
	I0914 17:07:23.124871   75163 host.go:66] Checking if "ha-679744" exists ...
	I0914 17:07:23.125170   75163 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-679744
	I0914 17:07:23.149504   75163 host.go:66] Checking if "ha-679744" exists ...
	I0914 17:07:23.149805   75163 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 17:07:23.149853   75163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-679744
	I0914 17:07:23.170962   75163 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19643-2222/.minikube/machines/ha-679744/id_rsa Username:docker}
	I0914 17:07:23.272766   75163 ssh_runner.go:195] Run: systemctl --version
	I0914 17:07:23.277279   75163 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 17:07:23.289684   75163 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 17:07:23.372041   75163 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:71 SystemTime:2024-09-14 17:07:23.354284627 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0914 17:07:23.372638   75163 kubeconfig.go:125] found "ha-679744" server: "https://192.168.49.254:8443"
	I0914 17:07:23.372672   75163 api_server.go:166] Checking apiserver status ...
	I0914 17:07:23.372724   75163 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 17:07:23.385000   75163 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2309/cgroup
	I0914 17:07:23.394595   75163 api_server.go:182] apiserver freezer: "10:freezer:/docker/c54d4351bc8fee5c089360a2d83a31d542f30199df6446510540e6a3b816dcc7/kubepods/burstable/pod2f7e53e8eaa29a0c1b6d823d9e1c102b/d5c5568734079d4733e9f580a01afc30ccd2f932fe3983b8741a33863922a2f9"
	I0914 17:07:23.394673   75163 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/c54d4351bc8fee5c089360a2d83a31d542f30199df6446510540e6a3b816dcc7/kubepods/burstable/pod2f7e53e8eaa29a0c1b6d823d9e1c102b/d5c5568734079d4733e9f580a01afc30ccd2f932fe3983b8741a33863922a2f9/freezer.state
	I0914 17:07:23.403727   75163 api_server.go:204] freezer state: "THAWED"
	I0914 17:07:23.403757   75163 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0914 17:07:23.412720   75163 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0914 17:07:23.412748   75163 status.go:422] ha-679744 apiserver status = Running (err=<nil>)
	I0914 17:07:23.412758   75163 status.go:257] ha-679744 status: &{Name:ha-679744 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 17:07:23.412775   75163 status.go:255] checking status of ha-679744-m02 ...
	I0914 17:07:23.413110   75163 cli_runner.go:164] Run: docker container inspect ha-679744-m02 --format={{.State.Status}}
	I0914 17:07:23.434541   75163 status.go:330] ha-679744-m02 host status = "Stopped" (err=<nil>)
	I0914 17:07:23.434595   75163 status.go:343] host is not running, skipping remaining checks
	I0914 17:07:23.434603   75163 status.go:257] ha-679744-m02 status: &{Name:ha-679744-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 17:07:23.434627   75163 status.go:255] checking status of ha-679744-m03 ...
	I0914 17:07:23.434953   75163 cli_runner.go:164] Run: docker container inspect ha-679744-m03 --format={{.State.Status}}
	I0914 17:07:23.452221   75163 status.go:330] ha-679744-m03 host status = "Running" (err=<nil>)
	I0914 17:07:23.452245   75163 host.go:66] Checking if "ha-679744-m03" exists ...
	I0914 17:07:23.452542   75163 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-679744-m03
	I0914 17:07:23.468721   75163 host.go:66] Checking if "ha-679744-m03" exists ...
	I0914 17:07:23.469059   75163 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 17:07:23.469108   75163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-679744-m03
	I0914 17:07:23.486423   75163 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/19643-2222/.minikube/machines/ha-679744-m03/id_rsa Username:docker}
	I0914 17:07:23.584696   75163 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 17:07:23.598453   75163 kubeconfig.go:125] found "ha-679744" server: "https://192.168.49.254:8443"
	I0914 17:07:23.598479   75163 api_server.go:166] Checking apiserver status ...
	I0914 17:07:23.598522   75163 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 17:07:23.611372   75163 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2113/cgroup
	I0914 17:07:23.623093   75163 api_server.go:182] apiserver freezer: "10:freezer:/docker/9fbcabb47b2f6768e3573ccaa7de43b50ac1ed4e695840cd1a4304517ce5b1b9/kubepods/burstable/pod8050d3298331126f46e223be47e5af7f/fb69de69134613467b37182de26443fae31ade4c6e9f8d601ea39ee45754bb89"
	I0914 17:07:23.623215   75163 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/9fbcabb47b2f6768e3573ccaa7de43b50ac1ed4e695840cd1a4304517ce5b1b9/kubepods/burstable/pod8050d3298331126f46e223be47e5af7f/fb69de69134613467b37182de26443fae31ade4c6e9f8d601ea39ee45754bb89/freezer.state
	I0914 17:07:23.633087   75163 api_server.go:204] freezer state: "THAWED"
	I0914 17:07:23.633118   75163 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0914 17:07:23.641106   75163 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0914 17:07:23.641133   75163 status.go:422] ha-679744-m03 apiserver status = Running (err=<nil>)
	I0914 17:07:23.641144   75163 status.go:257] ha-679744-m03 status: &{Name:ha-679744-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 17:07:23.641161   75163 status.go:255] checking status of ha-679744-m04 ...
	I0914 17:07:23.641480   75163 cli_runner.go:164] Run: docker container inspect ha-679744-m04 --format={{.State.Status}}
	I0914 17:07:23.659356   75163 status.go:330] ha-679744-m04 host status = "Running" (err=<nil>)
	I0914 17:07:23.659430   75163 host.go:66] Checking if "ha-679744-m04" exists ...
	I0914 17:07:23.659762   75163 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-679744-m04
	I0914 17:07:23.676552   75163 host.go:66] Checking if "ha-679744-m04" exists ...
	I0914 17:07:23.676849   75163 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 17:07:23.676896   75163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-679744-m04
	I0914 17:07:23.694540   75163 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/19643-2222/.minikube/machines/ha-679744-m04/id_rsa Username:docker}
	I0914 17:07:23.801410   75163 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 17:07:23.814669   75163 status.go:257] ha-679744-m04 status: &{Name:ha-679744-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
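The `-- stdout --` block above follows a fixed per-node layout: a bare node name, then `key: value` lines, with a blank line between nodes. A minimal sketch of parsing that layout into a dict, assuming only the field names visible in this trace:

```python
def parse_minikube_status(stdout: str) -> dict[str, dict[str, str]]:
    """Parse `minikube status` plain-text output into {node: {field: value}}."""
    nodes: dict[str, dict[str, str]] = {}
    current = None
    for line in stdout.splitlines():
        line = line.strip()
        if not line:
            current = None          # blank line ends the current node block
            continue
        if ":" in line:
            if current is None:
                continue            # stray line outside any node block
            key, _, value = line.partition(":")
            nodes[current][key.strip()] = value.strip()
        else:
            current = line          # a bare line starts a new node block
            nodes[current] = {}
    return nodes


# Excerpt from the trace above: one running and one stopped node.
sample = """\
ha-679744
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-679744-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
"""
status = parse_minikube_status(sample)
print(status["ha-679744-m02"]["host"])  # Stopped
```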
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.73s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
E0914 17:07:23.898093    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/functional-895781/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:07:24.059572    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/functional-895781/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:07:24.381526    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/functional-895781/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.56s)

TestMultiControlPlane/serial/RestartSecondaryNode (65.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-679744 node start m02 -v=7 --alsologtostderr
E0914 17:07:25.023670    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/functional-895781/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:07:26.305230    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/functional-895781/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:07:28.867321    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/functional-895781/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:07:33.989520    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/functional-895781/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:07:44.231625    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/functional-895781/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:07:45.632999    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/addons-522792/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:08:04.712949    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/functional-895781/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:08:13.343292    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/addons-522792/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-679744 node start m02 -v=7 --alsologtostderr: (1m4.693006142s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-679744 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-679744 status -v=7 --alsologtostderr: (1.030853737s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (65.85s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.81s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (186.23s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-679744 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-679744 -v=7 --alsologtostderr
E0914 17:08:45.675238    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/functional-895781/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-679744 -v=7 --alsologtostderr: (34.25344406s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-679744 --wait=true -v=7 --alsologtostderr
E0914 17:10:07.598193    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/functional-895781/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-679744 --wait=true -v=7 --alsologtostderr: (2m31.832470186s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-679744
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (186.23s)

TestMultiControlPlane/serial/DeleteSecondaryNode (11.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-679744 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-679744 node delete m03 -v=7 --alsologtostderr: (10.817604326s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-679744 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
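The go-template passed to `kubectl get nodes` above walks `.items[].status.conditions[]` and prints the status of every `Ready` condition, one per node. The same traversal in Python, over a hypothetical two-node list standing in for `kubectl get nodes -o json` output:

```python
def ready_statuses(node_list: dict) -> list[str]:
    """Collect the .status of each Ready condition, mirroring the go-template."""
    out = []
    for item in node_list.get("items", []):
        for cond in item.get("status", {}).get("conditions", []):
            if cond.get("type") == "Ready":
                out.append(cond.get("status"))
    return out


# Hypothetical NodeList: the test passes when every entry is "True".
nodes = {
    "items": [
        {"status": {"conditions": [{"type": "Ready", "status": "True"}]}},
        {"status": {"conditions": [
            {"type": "MemoryPressure", "status": "False"},
            {"type": "Ready", "status": "True"},
        ]}},
    ]
}
print(ready_statuses(nodes))  # ['True', 'True']
```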
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.76s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.54s)

TestMultiControlPlane/serial/StopCluster (33.47s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-679744 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-679744 stop -v=7 --alsologtostderr: (33.359773456s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-679744 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-679744 status -v=7 --alsologtostderr: exit status 7 (114.347942ms)

                                                
                                                
-- stdout --
	ha-679744
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-679744-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-679744-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 17:12:22.979715  101547 out.go:345] Setting OutFile to fd 1 ...
	I0914 17:12:22.979929  101547 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:12:22.979954  101547 out.go:358] Setting ErrFile to fd 2...
	I0914 17:12:22.979974  101547 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:12:22.980251  101547 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-2222/.minikube/bin
	I0914 17:12:22.980479  101547 out.go:352] Setting JSON to false
	I0914 17:12:22.980535  101547 mustload.go:65] Loading cluster: ha-679744
	I0914 17:12:22.980581  101547 notify.go:220] Checking for updates...
	I0914 17:12:22.981037  101547 config.go:182] Loaded profile config "ha-679744": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 17:12:22.981358  101547 status.go:255] checking status of ha-679744 ...
	I0914 17:12:22.981976  101547 cli_runner.go:164] Run: docker container inspect ha-679744 --format={{.State.Status}}
	I0914 17:12:22.999834  101547 status.go:330] ha-679744 host status = "Stopped" (err=<nil>)
	I0914 17:12:22.999856  101547 status.go:343] host is not running, skipping remaining checks
	I0914 17:12:22.999863  101547 status.go:257] ha-679744 status: &{Name:ha-679744 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 17:12:22.999901  101547 status.go:255] checking status of ha-679744-m02 ...
	I0914 17:12:23.000203  101547 cli_runner.go:164] Run: docker container inspect ha-679744-m02 --format={{.State.Status}}
	I0914 17:12:23.026043  101547 status.go:330] ha-679744-m02 host status = "Stopped" (err=<nil>)
	I0914 17:12:23.026062  101547 status.go:343] host is not running, skipping remaining checks
	I0914 17:12:23.026070  101547 status.go:257] ha-679744-m02 status: &{Name:ha-679744-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 17:12:23.026091  101547 status.go:255] checking status of ha-679744-m04 ...
	I0914 17:12:23.026409  101547 cli_runner.go:164] Run: docker container inspect ha-679744-m04 --format={{.State.Status}}
	I0914 17:12:23.048276  101547 status.go:330] ha-679744-m04 host status = "Stopped" (err=<nil>)
	I0914 17:12:23.048300  101547 status.go:343] host is not running, skipping remaining checks
	I0914 17:12:23.048308  101547 status.go:257] ha-679744-m04 status: &{Name:ha-679744-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (33.47s)

TestMultiControlPlane/serial/RestartCluster (87.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-679744 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0914 17:12:23.734858    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/functional-895781/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:12:45.632422    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/addons-522792/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:12:51.440107    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/functional-895781/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-679744 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (1m26.500276797s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-679744 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (87.50s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.62s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.62s)

TestMultiControlPlane/serial/AddSecondaryNode (42.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-679744 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-679744 --control-plane -v=7 --alsologtostderr: (41.381736038s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-679744 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-679744 status -v=7 --alsologtostderr: (1.143712068s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (42.53s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.81s)

TestImageBuild/serial/Setup (32.79s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -p image-838119 --driver=docker  --container-runtime=docker
image_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -p image-838119 --driver=docker  --container-runtime=docker: (32.7889262s)
--- PASS: TestImageBuild/serial/Setup (32.79s)

TestImageBuild/serial/NormalBuild (1.77s)

                                                
                                                
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-838119
image_test.go:78: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-838119: (1.773555245s)
--- PASS: TestImageBuild/serial/NormalBuild (1.77s)

TestImageBuild/serial/BuildWithBuildArg (0.97s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-838119
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.97s)

TestImageBuild/serial/BuildWithDockerIgnore (0.92s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-838119
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.92s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.79s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-838119
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.79s)

TestJSONOutput/start/Command (79.28s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-743471 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-743471 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (1m19.278577629s)
--- PASS: TestJSONOutput/start/Command (79.28s)

TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.62s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-743471 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.62s)

TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.58s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-743471 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.58s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.75s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-743471 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-743471 --output=json --user=testUser: (5.752264606s)
--- PASS: TestJSONOutput/stop/Command (5.75s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-116298 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-116298 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (79.776782ms)

-- stdout --
	{"specversion":"1.0","id":"a110764e-ed05-43d4-aafd-5fb6de78351e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-116298] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"dd7d64a8-e0f3-4c13-a7eb-35412b4b04c6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19643"}}
	{"specversion":"1.0","id":"bacb7046-af2c-4c34-903b-f2991d1ceb41","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ce1c8996-524a-4631-9f1a-8db0e0902531","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19643-2222/kubeconfig"}}
	{"specversion":"1.0","id":"0049df6c-7281-4754-b0c7-4bca280486e1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-2222/.minikube"}}
	{"specversion":"1.0","id":"c0069b94-35cb-4de9-bd5c-e77d6645207c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"11cdbbbf-84b4-45df-bd8f-f13c5c1084d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c49a3056-809f-408b-b0de-e2bc19f200d6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-116298" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-116298
--- PASS: TestErrorJSONOutput (0.22s)

TestKicCustomNetwork/create_custom_network (33.33s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-410107 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-410107 --network=: (31.139575845s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-410107" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-410107
E0914 17:17:23.735339    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/functional-895781/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-410107: (2.169931661s)
--- PASS: TestKicCustomNetwork/create_custom_network (33.33s)

TestKicCustomNetwork/use_default_bridge_network (36.77s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-446958 --network=bridge
E0914 17:17:45.632342    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/addons-522792/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-446958 --network=bridge: (34.458413962s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-446958" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-446958
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-446958: (2.286164245s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (36.77s)

TestKicExistingNetwork (34.23s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-761542 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-761542 --network=existing-network: (32.436973603s)
helpers_test.go:175: Cleaning up "existing-network-761542" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-761542
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-761542: (1.643686111s)
--- PASS: TestKicExistingNetwork (34.23s)

TestKicCustomSubnet (34.19s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-540551 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-540551 --subnet=192.168.60.0/24: (32.034559734s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-540551 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-540551" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-540551
E0914 17:19:08.704714    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/addons-522792/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-540551: (2.125231418s)
--- PASS: TestKicCustomSubnet (34.19s)

TestKicStaticIP (31.87s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-646763 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-646763 --static-ip=192.168.200.200: (29.540323154s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-646763 ip
helpers_test.go:175: Cleaning up "static-ip-646763" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-646763
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-646763: (2.173876822s)
--- PASS: TestKicStaticIP (31.87s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (75.73s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-135048 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-135048 --driver=docker  --container-runtime=docker: (34.429340322s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-137953 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-137953 --driver=docker  --container-runtime=docker: (35.728292306s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-135048
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-137953
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-137953" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-137953
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-137953: (2.115548147s)
helpers_test.go:175: Cleaning up "first-135048" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-135048
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-135048: (2.082938822s)
--- PASS: TestMinikubeProfile (75.73s)

TestMountStart/serial/StartWithMountFirst (8.8s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-260241 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-260241 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (7.799113931s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.80s)

TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-260241 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

TestMountStart/serial/StartWithMountSecond (8.44s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-262210 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-262210 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (7.444661001s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.44s)

TestMountStart/serial/VerifyMountSecond (0.28s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-262210 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

TestMountStart/serial/DeleteFirst (1.48s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-260241 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-260241 --alsologtostderr -v=5: (1.480430526s)
--- PASS: TestMountStart/serial/DeleteFirst (1.48s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-262210 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.22s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-262210
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-262210: (1.216233932s)
--- PASS: TestMountStart/serial/Stop (1.22s)

TestMountStart/serial/RestartStopped (8.43s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-262210
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-262210: (7.43323583s)
--- PASS: TestMountStart/serial/RestartStopped (8.43s)

TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-262210 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (62.37s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-859350 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0914 17:22:23.735107    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/functional-895781/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-859350 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m1.721675647s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-859350 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (62.37s)

TestMultiNode/serial/DeployApp2Nodes (37.45s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-859350 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-859350 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-859350 -- rollout status deployment/busybox: (3.744799017s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-859350 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-859350 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-859350 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-859350 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-859350 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
E0914 17:22:45.632924    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/addons-522792/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-859350 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-859350 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-859350 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-859350 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-859350 -- exec busybox-7dff88458-2qrrp -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-859350 -- exec busybox-7dff88458-pmtzp -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-859350 -- exec busybox-7dff88458-2qrrp -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-859350 -- exec busybox-7dff88458-pmtzp -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-859350 -- exec busybox-7dff88458-2qrrp -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-859350 -- exec busybox-7dff88458-pmtzp -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (37.45s)

TestMultiNode/serial/PingHostFrom2Pods (1.05s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-859350 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-859350 -- exec busybox-7dff88458-2qrrp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-859350 -- exec busybox-7dff88458-2qrrp -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-859350 -- exec busybox-7dff88458-pmtzp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-859350 -- exec busybox-7dff88458-pmtzp -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.05s)

TestMultiNode/serial/AddNode (17.61s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-859350 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-859350 -v 3 --alsologtostderr: (16.819930315s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-859350 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (17.61s)

TestMultiNode/serial/MultiNodeLabels (0.11s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-859350 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.11s)

TestMultiNode/serial/ProfileList (0.36s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.36s)

TestMultiNode/serial/CopyFile (10.28s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-859350 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-859350 cp testdata/cp-test.txt multinode-859350:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-859350 ssh -n multinode-859350 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-859350 cp multinode-859350:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2963598172/001/cp-test_multinode-859350.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-859350 ssh -n multinode-859350 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-859350 cp multinode-859350:/home/docker/cp-test.txt multinode-859350-m02:/home/docker/cp-test_multinode-859350_multinode-859350-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-859350 ssh -n multinode-859350 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-859350 ssh -n multinode-859350-m02 "sudo cat /home/docker/cp-test_multinode-859350_multinode-859350-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-859350 cp multinode-859350:/home/docker/cp-test.txt multinode-859350-m03:/home/docker/cp-test_multinode-859350_multinode-859350-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-859350 ssh -n multinode-859350 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-859350 ssh -n multinode-859350-m03 "sudo cat /home/docker/cp-test_multinode-859350_multinode-859350-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-859350 cp testdata/cp-test.txt multinode-859350-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-859350 ssh -n multinode-859350-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-859350 cp multinode-859350-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2963598172/001/cp-test_multinode-859350-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-859350 ssh -n multinode-859350-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-859350 cp multinode-859350-m02:/home/docker/cp-test.txt multinode-859350:/home/docker/cp-test_multinode-859350-m02_multinode-859350.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-859350 ssh -n multinode-859350-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-859350 ssh -n multinode-859350 "sudo cat /home/docker/cp-test_multinode-859350-m02_multinode-859350.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-859350 cp multinode-859350-m02:/home/docker/cp-test.txt multinode-859350-m03:/home/docker/cp-test_multinode-859350-m02_multinode-859350-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-859350 ssh -n multinode-859350-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-859350 ssh -n multinode-859350-m03 "sudo cat /home/docker/cp-test_multinode-859350-m02_multinode-859350-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-859350 cp testdata/cp-test.txt multinode-859350-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-859350 ssh -n multinode-859350-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-859350 cp multinode-859350-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2963598172/001/cp-test_multinode-859350-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-859350 ssh -n multinode-859350-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-859350 cp multinode-859350-m03:/home/docker/cp-test.txt multinode-859350:/home/docker/cp-test_multinode-859350-m03_multinode-859350.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-859350 ssh -n multinode-859350-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-859350 ssh -n multinode-859350 "sudo cat /home/docker/cp-test_multinode-859350-m03_multinode-859350.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-859350 cp multinode-859350-m03:/home/docker/cp-test.txt multinode-859350-m02:/home/docker/cp-test_multinode-859350-m03_multinode-859350-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-859350 ssh -n multinode-859350-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-859350 ssh -n multinode-859350-m02 "sudo cat /home/docker/cp-test_multinode-859350-m03_multinode-859350-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.28s)

TestMultiNode/serial/StopNode (2.32s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-859350 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-859350 node stop m03: (1.216471652s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-859350 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-859350 status: exit status 7 (551.099523ms)

-- stdout --
	multinode-859350
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-859350-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-859350-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-859350 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-859350 status --alsologtostderr: exit status 7 (552.183032ms)

-- stdout --
	multinode-859350
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-859350-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-859350-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0914 17:23:40.485012  175749 out.go:345] Setting OutFile to fd 1 ...
	I0914 17:23:40.485196  175749 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:23:40.485225  175749 out.go:358] Setting ErrFile to fd 2...
	I0914 17:23:40.485244  175749 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:23:40.485518  175749 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-2222/.minikube/bin
	I0914 17:23:40.485731  175749 out.go:352] Setting JSON to false
	I0914 17:23:40.485789  175749 mustload.go:65] Loading cluster: multinode-859350
	I0914 17:23:40.485873  175749 notify.go:220] Checking for updates...
	I0914 17:23:40.486281  175749 config.go:182] Loaded profile config "multinode-859350": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 17:23:40.486314  175749 status.go:255] checking status of multinode-859350 ...
	I0914 17:23:40.486935  175749 cli_runner.go:164] Run: docker container inspect multinode-859350 --format={{.State.Status}}
	I0914 17:23:40.516871  175749 status.go:330] multinode-859350 host status = "Running" (err=<nil>)
	I0914 17:23:40.516894  175749 host.go:66] Checking if "multinode-859350" exists ...
	I0914 17:23:40.517220  175749 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-859350
	I0914 17:23:40.559409  175749 host.go:66] Checking if "multinode-859350" exists ...
	I0914 17:23:40.559734  175749 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 17:23:40.559779  175749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-859350
	I0914 17:23:40.589310  175749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/19643-2222/.minikube/machines/multinode-859350/id_rsa Username:docker}
	I0914 17:23:40.689054  175749 ssh_runner.go:195] Run: systemctl --version
	I0914 17:23:40.693940  175749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 17:23:40.705593  175749 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 17:23:40.758864  175749 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-09-14 17:23:40.748487199 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0914 17:23:40.759577  175749 kubeconfig.go:125] found "multinode-859350" server: "https://192.168.67.2:8443"
	I0914 17:23:40.759612  175749 api_server.go:166] Checking apiserver status ...
	I0914 17:23:40.759664  175749 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 17:23:40.771825  175749 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2284/cgroup
	I0914 17:23:40.781531  175749 api_server.go:182] apiserver freezer: "10:freezer:/docker/27bda7eab0d287cfe20b25e924db4f4e2ab696de9765b7593cb592ed91fe364a/kubepods/burstable/podfa7eff56c6dbf6ca90bc1252771fa6c8/ac6484183d0d3ead40de91610051b1ec2ba94323ecc99ed4380f78abc93108ac"
	I0914 17:23:40.781623  175749 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/27bda7eab0d287cfe20b25e924db4f4e2ab696de9765b7593cb592ed91fe364a/kubepods/burstable/podfa7eff56c6dbf6ca90bc1252771fa6c8/ac6484183d0d3ead40de91610051b1ec2ba94323ecc99ed4380f78abc93108ac/freezer.state
	I0914 17:23:40.790788  175749 api_server.go:204] freezer state: "THAWED"
	I0914 17:23:40.790819  175749 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0914 17:23:40.798623  175749 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0914 17:23:40.798654  175749 status.go:422] multinode-859350 apiserver status = Running (err=<nil>)
	I0914 17:23:40.798666  175749 status.go:257] multinode-859350 status: &{Name:multinode-859350 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 17:23:40.798683  175749 status.go:255] checking status of multinode-859350-m02 ...
	I0914 17:23:40.798990  175749 cli_runner.go:164] Run: docker container inspect multinode-859350-m02 --format={{.State.Status}}
	I0914 17:23:40.816319  175749 status.go:330] multinode-859350-m02 host status = "Running" (err=<nil>)
	I0914 17:23:40.816344  175749 host.go:66] Checking if "multinode-859350-m02" exists ...
	I0914 17:23:40.816655  175749 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-859350-m02
	I0914 17:23:40.833414  175749 host.go:66] Checking if "multinode-859350-m02" exists ...
	I0914 17:23:40.833739  175749 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 17:23:40.833790  175749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-859350-m02
	I0914 17:23:40.851276  175749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/19643-2222/.minikube/machines/multinode-859350-m02/id_rsa Username:docker}
	I0914 17:23:40.948720  175749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 17:23:40.960577  175749 status.go:257] multinode-859350-m02 status: &{Name:multinode-859350-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0914 17:23:40.960633  175749 status.go:255] checking status of multinode-859350-m03 ...
	I0914 17:23:40.960933  175749 cli_runner.go:164] Run: docker container inspect multinode-859350-m03 --format={{.State.Status}}
	I0914 17:23:40.980463  175749 status.go:330] multinode-859350-m03 host status = "Stopped" (err=<nil>)
	I0914 17:23:40.980490  175749 status.go:343] host is not running, skipping remaining checks
	I0914 17:23:40.980498  175749 status.go:257] multinode-859350-m03 status: &{Name:multinode-859350-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.32s)

TestMultiNode/serial/StartAfterStop (11.37s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-859350 node start m03 -v=7 --alsologtostderr
E0914 17:23:46.801656    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/functional-895781/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-859350 node start m03 -v=7 --alsologtostderr: (10.581423517s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-859350 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (11.37s)

TestMultiNode/serial/RestartKeepsNodes (103.43s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-859350
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-859350
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-859350: (22.761754998s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-859350 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-859350 --wait=true -v=8 --alsologtostderr: (1m20.536642923s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-859350
--- PASS: TestMultiNode/serial/RestartKeepsNodes (103.43s)

TestMultiNode/serial/DeleteNode (5.69s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-859350 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-859350 node delete m03: (4.993849152s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-859350 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.69s)

TestMultiNode/serial/StopMultiNode (21.63s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-859350 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-859350 stop: (21.438074248s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-859350 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-859350 status: exit status 7 (90.572417ms)

-- stdout --
	multinode-859350
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-859350-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-859350 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-859350 status --alsologtostderr: exit status 7 (102.309781ms)

-- stdout --
	multinode-859350
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-859350-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0914 17:26:03.048960  189355 out.go:345] Setting OutFile to fd 1 ...
	I0914 17:26:03.049153  189355 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:26:03.049184  189355 out.go:358] Setting ErrFile to fd 2...
	I0914 17:26:03.049206  189355 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:26:03.049506  189355 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-2222/.minikube/bin
	I0914 17:26:03.049747  189355 out.go:352] Setting JSON to false
	I0914 17:26:03.049826  189355 mustload.go:65] Loading cluster: multinode-859350
	I0914 17:26:03.049893  189355 notify.go:220] Checking for updates...
	I0914 17:26:03.050347  189355 config.go:182] Loaded profile config "multinode-859350": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 17:26:03.050387  189355 status.go:255] checking status of multinode-859350 ...
	I0914 17:26:03.050999  189355 cli_runner.go:164] Run: docker container inspect multinode-859350 --format={{.State.Status}}
	I0914 17:26:03.070901  189355 status.go:330] multinode-859350 host status = "Stopped" (err=<nil>)
	I0914 17:26:03.070931  189355 status.go:343] host is not running, skipping remaining checks
	I0914 17:26:03.070939  189355 status.go:257] multinode-859350 status: &{Name:multinode-859350 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 17:26:03.070979  189355 status.go:255] checking status of multinode-859350-m02 ...
	I0914 17:26:03.071321  189355 cli_runner.go:164] Run: docker container inspect multinode-859350-m02 --format={{.State.Status}}
	I0914 17:26:03.097111  189355 status.go:330] multinode-859350-m02 host status = "Stopped" (err=<nil>)
	I0914 17:26:03.097138  189355 status.go:343] host is not running, skipping remaining checks
	I0914 17:26:03.097146  189355 status.go:257] multinode-859350-m02 status: &{Name:multinode-859350-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.63s)

TestMultiNode/serial/RestartMultiNode (50.33s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-859350 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-859350 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (49.605869278s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-859350 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (50.33s)

TestMultiNode/serial/ValidateNameConflict (34.29s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-859350
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-859350-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-859350-m02 --driver=docker  --container-runtime=docker: exit status 14 (77.022324ms)

-- stdout --
	* [multinode-859350-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19643
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19643-2222/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-2222/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-859350-m02' is duplicated with machine name 'multinode-859350-m02' in profile 'multinode-859350'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-859350-m03 --driver=docker  --container-runtime=docker
E0914 17:27:23.734946    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/functional-895781/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-859350-m03 --driver=docker  --container-runtime=docker: (31.747986096s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-859350
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-859350: exit status 80 (338.507828ms)

-- stdout --
	* Adding node m03 to cluster multinode-859350 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-859350-m03 already exists in multinode-859350-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-859350-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-859350-m03: (2.076722313s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (34.29s)

TestPreload (102.08s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-428089 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
E0914 17:27:45.632999    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/addons-522792/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-428089 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (1m4.630090583s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-428089 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-428089 image pull gcr.io/k8s-minikube/busybox: (2.139433855s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-428089
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-428089: (10.675232142s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-428089 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-428089 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (22.078475898s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-428089 image list
helpers_test.go:175: Cleaning up "test-preload-428089" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-428089
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-428089: (2.188011446s)
--- PASS: TestPreload (102.08s)

TestScheduledStopUnix (105.98s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-793629 --memory=2048 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-793629 --memory=2048 --driver=docker  --container-runtime=docker: (32.722588905s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-793629 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-793629 -n scheduled-stop-793629
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-793629 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-793629 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-793629 -n scheduled-stop-793629
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-793629
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-793629 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-793629
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-793629: exit status 7 (68.754318ms)

-- stdout --
	scheduled-stop-793629
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-793629 -n scheduled-stop-793629
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-793629 -n scheduled-stop-793629: exit status 7 (78.838712ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-793629" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-793629
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-793629: (1.687444132s)
--- PASS: TestScheduledStopUnix (105.98s)

TestSkaffold (116.2s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe4035222349 version
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p skaffold-689497 --memory=2600 --driver=docker  --container-runtime=docker
skaffold_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p skaffold-689497 --memory=2600 --driver=docker  --container-runtime=docker: (29.992666242s)
skaffold_test.go:86: copying out/minikube-linux-arm64 to /home/jenkins/workspace/Docker_Linux_docker_arm64/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe4035222349 run --minikube-profile skaffold-689497 --kube-context skaffold-689497 --status-check=true --port-forward=false --interactive=false
E0914 17:32:23.735625    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/functional-895781/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe4035222349 run --minikube-profile skaffold-689497 --kube-context skaffold-689497 --status-check=true --port-forward=false --interactive=false: (1m10.621270641s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-dfdfcdd74-nxmzw" [b316becc-242d-4967-9dfa-dd69a0f758cf] Running
E0914 17:32:45.632117    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/addons-522792/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.003155955s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-65b77d99b7-8hrlj" [6df2559a-770f-4425-a222-6ed38092a02f] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003908389s
helpers_test.go:175: Cleaning up "skaffold-689497" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p skaffold-689497
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p skaffold-689497: (2.877319989s)
--- PASS: TestSkaffold (116.20s)

                                                
                                    
TestInsufficientStorage (11.05s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-120256 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-120256 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (8.759863999s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"4bace857-4dc0-498e-a01f-468846784c2f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-120256] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"92a4c9d3-9381-4fcd-876a-1d6a5dad83a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19643"}}
	{"specversion":"1.0","id":"f671414c-7aca-4fd8-8f71-c5d2fec04c14","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"2a8ee7c8-7c25-4fac-939e-7a4c9a4b54b4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19643-2222/kubeconfig"}}
	{"specversion":"1.0","id":"c4a13e84-2e13-4715-a018-a5152596611f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-2222/.minikube"}}
	{"specversion":"1.0","id":"aa4bd9f4-acde-4917-bbe4-88381f14cf40","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"00d9b2b5-a58c-4fe3-b7a4-96a059285b61","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2065848b-131c-4455-af40-5871407be487","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"aeb70839-b80d-4c1f-acbe-c118cc275098","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"c1b53f9a-ad40-4ba0-b3b2-c8416332cb0f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"ede1f4c9-b3fe-4f11-9d4d-fc78d9204fd4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"e572516e-7edb-4fc9-b2fa-efb8d38e6f02","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-120256\" primary control-plane node in \"insufficient-storage-120256\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"31a0f20e-eb86-4ddd-b862-b1573496ca6d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1726281268-19643 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"b0a0c1a4-d5aa-49b9-9be3-29e9ad7e46c2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"1274cb3d-1edb-47d8-b9db-8e9c5a6f0e48","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
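The `--output=json` stream captured above is one CloudEvents-style JSON object per line. A minimal sketch, consuming only fields that appear verbatim in this log, that picks the error event out of such a stream:

```python
import json

# One CloudEvents line from the `minikube start --output=json` run above
# (the RSRC_DOCKER_STORAGE error event), abbreviated to the fields used here.
event_line = (
    '{"specversion":"1.0","type":"io.k8s.sigs.minikube.error",'
    '"data":{"exitcode":"26","name":"RSRC_DOCKER_STORAGE",'
    '"message":"Docker is out of disk space! (/var is at 100% of capacity)."}}'
)

def classify(line: str):
    """Return (name, exitcode) for minikube error events, else None."""
    event = json.loads(line)
    if event.get("type") != "io.k8s.sigs.minikube.error":
        return None
    data = event.get("data", {})
    return data.get("name"), data.get("exitcode")

print(classify(event_line))  # ('RSRC_DOCKER_STORAGE', '26')
```

Note that `exitcode` is a JSON string, not a number, matching the payload in the log.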
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-120256 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-120256 --output=json --layout=cluster: exit status 7 (279.05303ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-120256","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-120256","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0914 17:33:04.986949  223290 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-120256" does not appear in /home/jenkins/minikube-integration/19643-2222/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-120256 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-120256 --output=json --layout=cluster: exit status 7 (313.220908ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-120256","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-120256","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0914 17:33:05.302096  223353 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-120256" does not appear in /home/jenkins/minikube-integration/19643-2222/kubeconfig
	E0914 17:33:05.312729  223353 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/insufficient-storage-120256/events.json: no such file or directory

                                                
                                                
** /stderr **
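The `--layout=cluster` status payload above nests per-node component health. A small sketch, with field names taken from the JSON shown (trimmed to the fields it uses), that lists the stopped components:

```python
import json

# The `status --output=json --layout=cluster` payload printed above,
# abbreviated to the fields this sketch reads.
layout_json = (
    '{"Name":"insufficient-storage-120256","StatusCode":507,'
    '"StatusName":"InsufficientStorage",'
    '"Nodes":[{"Name":"insufficient-storage-120256","StatusCode":507,'
    '"StatusName":"InsufficientStorage",'
    '"Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},'
    '"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}'
)

layout = json.loads(layout_json)

def stopped_components(layout: dict):
    """List (node, component) pairs reported as Stopped."""
    out = []
    for node in layout.get("Nodes", []):
        for name, comp in node.get("Components", {}).items():
            if comp.get("StatusName") == "Stopped":
                out.append((node["Name"], name))
    return out

print(stopped_components(layout))
```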
helpers_test.go:175: Cleaning up "insufficient-storage-120256" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-120256
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-120256: (1.692980217s)
--- PASS: TestInsufficientStorage (11.05s)

                                                
                                    
TestRunningBinaryUpgrade (89.66s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.268318180 start -p running-upgrade-890855 --memory=2200 --vm-driver=docker  --container-runtime=docker
E0914 17:39:04.003146    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/skaffold-689497/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.268318180 start -p running-upgrade-890855 --memory=2200 --vm-driver=docker  --container-runtime=docker: (33.673488912s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-890855 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0914 17:40:25.924508    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/skaffold-689497/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:40:26.803182    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/functional-895781/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-890855 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (52.665919557s)
helpers_test.go:175: Cleaning up "running-upgrade-890855" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-890855
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-890855: (2.59627113s)
--- PASS: TestRunningBinaryUpgrade (89.66s)

                                                
                                    
TestKubernetesUpgrade (388.87s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-693102 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-693102 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m3.025425562s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-693102
E0914 17:35:48.705969    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/addons-522792/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-693102: (10.971642582s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-693102 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-693102 status --format={{.Host}}: exit status 7 (100.896821ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-693102 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-693102 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m40.578889145s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-693102 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-693102 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-693102 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker: exit status 106 (111.283129ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-693102] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19643
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19643-2222/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-2222/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-693102
	    minikube start -p kubernetes-upgrade-693102 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6931022 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-693102 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-693102 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-693102 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (31.076399309s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-693102" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-693102
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-693102: (2.871593246s)
--- PASS: TestKubernetesUpgrade (388.87s)
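Several distinct exit codes appear in this run: 7 from `status` on a stopped host, 26 for RSRC_DOCKER_STORAGE, 106 for K8S_DOWNGRADE_UNSUPPORTED, and 14 for MK_USAGE. A sketch mapping codes to the reasons printed alongside them; this is assembled from this log only, not an exhaustive minikube exit-code table:

```python
# Exit codes observed in this run, paired with the reason codes minikube
# printed next to them. Assembled from this report's output only.
EXIT_REASONS = {
    7: "host stopped / status unavailable (tests treat this as 'may be ok')",
    14: "MK_USAGE: invalid flag combination",
    26: "RSRC_DOCKER_STORAGE: Docker is out of disk space",
    106: "K8S_DOWNGRADE_UNSUPPORTED: cannot downgrade an existing cluster",
}

def explain(code: int) -> str:
    """Look up a reason string for an observed exit code."""
    return EXIT_REASONS.get(code, f"unknown exit code {code}")

print(explain(106))
```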

                                                
                                    
TestMissingContainerUpgrade (169.71s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.946083282 start -p missing-upgrade-087006 --memory=2200 --driver=docker  --container-runtime=docker
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.946083282 start -p missing-upgrade-087006 --memory=2200 --driver=docker  --container-runtime=docker: (1m37.252598385s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-087006
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-087006: (10.340488976s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-087006
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-087006 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-087006 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (58.882398733s)
helpers_test.go:175: Cleaning up "missing-upgrade-087006" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-087006
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-087006: (2.491071672s)
--- PASS: TestMissingContainerUpgrade (169.71s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-277057 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-277057 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (107.189173ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-277057] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19643
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19643-2222/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-2222/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (45.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-277057 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-277057 --driver=docker  --container-runtime=docker: (44.596965796s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-277057 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (45.07s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (18.9s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-277057 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-277057 --no-kubernetes --driver=docker  --container-runtime=docker: (16.851083186s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-277057 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-277057 status -o json: exit status 2 (323.189167ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-277057","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-277057
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-277057: (1.72697056s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (18.90s)
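The `status -o json` payload above shows the host running with the Kubernetes components stopped. A minimal sketch, with keys copied from that payload, deciding whether Kubernetes is up:

```python
import json

# The `status -o json` payload printed above, verbatim.
status_json = ('{"Name":"NoKubernetes-277057","Host":"Running","Kubelet":"Stopped",'
               '"APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}')

status = json.loads(status_json)

def k8s_running(status: dict) -> bool:
    """Kubernetes counts as up only if both kubelet and apiserver are running."""
    return status["Kubelet"] == "Running" and status["APIServer"] == "Running"

print(status["Host"], k8s_running(status))  # Running False
```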

                                                
                                    
TestNoKubernetes/serial/Start (7.77s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-277057 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-277057 --no-kubernetes --driver=docker  --container-runtime=docker: (7.766019081s)
--- PASS: TestNoKubernetes/serial/Start (7.77s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-277057 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-277057 "sudo systemctl is-active --quiet service kubelet": exit status 1 (282.943802ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.00s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-277057
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-277057: (1.21914508s)
--- PASS: TestNoKubernetes/serial/Stop (1.22s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-277057 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-277057 --driver=docker  --container-runtime=docker: (7.287502548s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.29s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-277057 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-277057 "sudo systemctl is-active --quiet service kubelet": exit status 1 (272.827723ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.82s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.82s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (93.52s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1807448803 start -p stopped-upgrade-548476 --memory=2200 --vm-driver=docker  --container-runtime=docker
E0914 17:37:23.734928    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/functional-895781/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:37:42.063374    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/skaffold-689497/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:37:42.069984    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/skaffold-689497/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:37:42.081374    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/skaffold-689497/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:37:42.102724    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/skaffold-689497/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:37:42.144462    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/skaffold-689497/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:37:42.226652    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/skaffold-689497/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:37:42.388121    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/skaffold-689497/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:37:42.709544    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/skaffold-689497/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:37:43.351552    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/skaffold-689497/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:37:44.633313    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/skaffold-689497/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:37:45.632070    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/addons-522792/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:37:47.195268    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/skaffold-689497/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:37:52.316991    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/skaffold-689497/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:38:02.558974    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/skaffold-689497/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1807448803 start -p stopped-upgrade-548476 --memory=2200 --vm-driver=docker  --container-runtime=docker: (49.829510494s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1807448803 -p stopped-upgrade-548476 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1807448803 -p stopped-upgrade-548476 stop: (10.82490572s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-548476 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0914 17:38:23.041246    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/skaffold-689497/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-548476 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (32.864874974s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (93.52s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.42s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-548476
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-548476: (1.416952596s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.42s)

TestPause/serial/Start (74.9s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-956309 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-956309 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (1m14.899528178s)
--- PASS: TestPause/serial/Start (74.90s)

TestPause/serial/SecondStartNoReconfiguration (31.77s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-956309 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-956309 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (31.745889058s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (31.77s)

TestPause/serial/Pause (0.8s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-956309 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.80s)

TestPause/serial/VerifyStatus (0.39s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-956309 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-956309 --output=json --layout=cluster: exit status 2 (388.282947ms)

-- stdout --
	{"Name":"pause-956309","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-956309","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.39s)
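The cluster-status JSON in the stdout block above is what `minikube status --output=json --layout=cluster` emits for a paused profile; a minimal sketch of reading it programmatically (the JSON literal is copied verbatim from the stdout above — minikube uses HTTP-style codes, where 418 means "Paused" and 405 "Stopped", which is also why the status command exits non-zero here):

```python
import json

# Copied from the stdout block above: status of the paused pause-956309 profile.
raw = '{"Name":"pause-956309","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-956309","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}'

status = json.loads(raw)

# Walk per-node component states: the apiserver reports 418 ("Paused") and
# the kubelet 405 ("Stopped"), matching the exit status 2 seen in the test.
for node in status["Nodes"]:
    for name, comp in node["Components"].items():
        print(name, comp["StatusCode"], comp["StatusName"])
# prints:
#   apiserver 418 Paused
#   kubelet 405 Stopped
```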

TestPause/serial/Unpause (0.7s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-956309 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.70s)

TestPause/serial/PauseAgain (1.03s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-956309 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-956309 --alsologtostderr -v=5: (1.031864901s)
--- PASS: TestPause/serial/PauseAgain (1.03s)

TestPause/serial/DeletePaused (2.5s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-956309 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-956309 --alsologtostderr -v=5: (2.503182902s)
--- PASS: TestPause/serial/DeletePaused (2.50s)

TestPause/serial/VerifyDeletedResources (5.31s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
E0914 17:43:09.765819    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/skaffold-689497/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:142: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (5.251066266s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-956309
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-956309: exit status 1 (16.043563ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-956309: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (5.31s)

TestNetworkPlugins/group/auto/Start (45.8s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-572585 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-572585 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (45.800991389s)
--- PASS: TestNetworkPlugins/group/auto/Start (45.80s)

TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-572585 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

TestNetworkPlugins/group/auto/NetCatPod (9.33s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-572585 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-s4hxs" [ee5de383-1d0d-473b-905c-6dc6117d3cd8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-s4hxs" [ee5de383-1d0d-473b-905c-6dc6117d3cd8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.003846221s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.33s)

TestNetworkPlugins/group/auto/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-572585 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.25s)
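The DNS checks in this report all run `nslookup kubernetes.default` inside the netcat pod; a rough stand-alone analogue of that probe in Python (`localhost` stands in for the in-cluster name, since `kubernetes.default` only resolves via the cluster DNS service):

```python
import socket

def can_resolve(name):
    """Rough analogue of the `nslookup` probe: True iff the name resolves."""
    try:
        socket.getaddrinfo(name, None)
        return True
    except socket.gaierror:
        return False

# "localhost" resolves on any sane resolver; inside the test pod the real
# target is `kubernetes.default`, answered by the cluster DNS service.
print(can_resolve("localhost"))
```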

TestNetworkPlugins/group/auto/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-572585 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

TestNetworkPlugins/group/auto/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-572585 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)
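The Localhost and HairPin checks above both reduce to `nc -w 5 -i 5 -z <target> 8080`: succeed iff a TCP connect completes (HairPin dials the pod's own `netcat` service). A rough stand-alone analogue, with a throwaway local listener standing in for the service:

```python
import socket

def tcp_check(host, port, timeout=5.0):
    """Rough equivalent of `nc -w 5 -z host port`: True iff TCP connect succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Throwaway listener so the probe has something to hit; port 0 lets the OS pick.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

print(tcp_check("127.0.0.1", port))   # listener up: connect succeeds
srv.close()
print(tcp_check("127.0.0.1", port))   # listener gone: connection refused
```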

TestNetworkPlugins/group/kindnet/Start (56.47s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-572585 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-572585 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (56.472905365s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (56.47s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-gpg62" [2eb38e1b-7e29-49fd-bf09-2d0fc296a079] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005008616s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-572585 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.36s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.32s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-572585 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-xbqcp" [ba3afa59-c9ef-428d-a5a8-fa8074bd2f09] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-xbqcp" [ba3afa59-c9ef-428d-a5a8-fa8074bd2f09] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.00334916s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.32s)

TestNetworkPlugins/group/kindnet/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-572585 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

TestNetworkPlugins/group/kindnet/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-572585 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.18s)

TestNetworkPlugins/group/kindnet/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-572585 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

TestNetworkPlugins/group/calico/Start (87.78s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-572585 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-572585 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m27.78188737s)
--- PASS: TestNetworkPlugins/group/calico/Start (87.78s)

TestNetworkPlugins/group/custom-flannel/Start (62.42s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-572585 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
E0914 17:47:23.735367    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/functional-895781/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:47:42.060353    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/skaffold-689497/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:47:45.632802    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/addons-522792/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-572585 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (1m2.41989966s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (62.42s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.43s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-572585 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.43s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (14.39s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-572585 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-kg6nr" [c76f2608-ddc6-4922-abd4-e54a2d1234dc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-kg6nr" [c76f2608-ddc6-4922-abd4-e54a2d1234dc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 14.003279398s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (14.39s)

TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-572585 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-572585 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.20s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-572585 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.23s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-2h47n" [43d74614-4e2b-477e-ae28-c379eb292c25] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005092394s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-572585 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.37s)

TestNetworkPlugins/group/calico/NetCatPod (12.38s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-572585 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-hxzzf" [6ce16a3a-d543-4911-b26b-8ad6b6f9b57a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-hxzzf" [6ce16a3a-d543-4911-b26b-8ad6b6f9b57a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.004612398s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.38s)

TestNetworkPlugins/group/false/Start (48.97s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p false-572585 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p false-572585 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (48.966612351s)
--- PASS: TestNetworkPlugins/group/false/Start (48.97s)

TestNetworkPlugins/group/calico/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-572585 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.21s)

TestNetworkPlugins/group/calico/Localhost (0.3s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-572585 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.30s)

TestNetworkPlugins/group/calico/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-572585 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.22s)

TestNetworkPlugins/group/enable-default-cni/Start (78.89s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-572585 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-572585 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (1m18.894715612s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (78.89s)

TestNetworkPlugins/group/false/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p false-572585 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.37s)

TestNetworkPlugins/group/false/NetCatPod (11.34s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-572585 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-4k7t4" [7c3c4cb2-7e09-4f0e-a37f-358082b2c0cd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-4k7t4" [7c3c4cb2-7e09-4f0e-a37f-358082b2c0cd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 11.003526538s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (11.34s)

TestNetworkPlugins/group/false/DNS (0.32s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-572585 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.32s)

TestNetworkPlugins/group/false/Localhost (0.28s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-572585 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.28s)

TestNetworkPlugins/group/false/HairPin (0.29s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-572585 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.29s)

TestNetworkPlugins/group/flannel/Start (61.19s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-572585 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
E0914 17:49:57.303112    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/auto-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:50:17.784715    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/auto-572585/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-572585 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (1m1.188445357s)
--- PASS: TestNetworkPlugins/group/flannel/Start (61.19s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-572585 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.3s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-572585 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-9jdw5" [bfc29bd5-4fac-494a-b546-5525c6557567] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-9jdw5" [bfc29bd5-4fac-494a-b546-5525c6557567] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.004425993s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.30s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-572585 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-572585 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-572585 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)
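The HairPin check above is purely an exit-status test: `nc -w 5 -i 5 -z netcat 8080` exits 0 when the pod can reach its own service name (hairpin traffic) and non-zero otherwise, and the harness passes or fails on that status alone. A cluster-free sketch of the decision; `probe` is a hypothetical stand-in for the `nc -z` call:

```shell
#!/bin/sh
# Sketch of the exit-status convention behind `nc -w 5 -i 5 -z netcat 8080`:
# nc -z exits 0 if the port accepts a connection, non-zero otherwise.
# probe is a hypothetical stand-in so the sketch runs without a cluster;
# its third argument simulates the port state nc would observe.
probe() {
  [ "$3" = "open" ]
}

if probe netcat 8080 open; then
  echo "hairpin ok"
else
  echo "hairpin failed"
fi
```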

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-tn5ts" [5c6e5c3b-b9c6-4933-84bb-982134752a4a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005371863s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/bridge/Start (83.56s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-572585 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
E0914 17:50:58.747017    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/auto-572585/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-572585 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (1m23.558198251s)
--- PASS: TestNetworkPlugins/group/bridge/Start (83.56s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-572585 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.39s)

TestNetworkPlugins/group/flannel/NetCatPod (14.35s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-572585 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-trghd" [f668b362-3ac4-49d2-9a0c-1a70596f9140] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0914 17:51:03.724535    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/kindnet-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:51:03.730978    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/kindnet-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:51:03.742597    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/kindnet-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:51:03.764158    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/kindnet-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:51:03.805621    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/kindnet-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:51:03.887854    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/kindnet-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:51:04.049930    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/kindnet-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:51:04.372932    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/kindnet-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:51:05.015204    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/kindnet-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:51:06.296970    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/kindnet-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:51:08.858406    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/kindnet-572585/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-trghd" [f668b362-3ac4-49d2-9a0c-1a70596f9140] Running
E0914 17:51:13.980376    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/kindnet-572585/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 14.00458408s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (14.35s)

TestNetworkPlugins/group/flannel/DNS (0.29s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-572585 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.29s)

TestNetworkPlugins/group/flannel/Localhost (0.39s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-572585 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.39s)

TestNetworkPlugins/group/flannel/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-572585 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.24s)

TestNetworkPlugins/group/kubenet/Start (75.27s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kubenet-572585 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
E0914 17:51:44.703077    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/kindnet-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:52:20.668703    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/auto-572585/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kubenet-572585 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (1m15.272936793s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (75.27s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-572585 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

TestNetworkPlugins/group/bridge/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-572585 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-pwqgz" [e1063c6c-9ff5-4284-83ac-ae66ae75f8cd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0914 17:52:23.735831    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/functional-895781/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:52:25.664850    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/kindnet-572585/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-pwqgz" [e1063c6c-9ff5-4284-83ac-ae66ae75f8cd] Running
E0914 17:52:28.707322    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/addons-522792/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004077221s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.28s)

TestNetworkPlugins/group/bridge/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-572585 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.21s)

TestNetworkPlugins/group/bridge/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-572585 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

TestNetworkPlugins/group/bridge/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-572585 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)

TestStartStop/group/old-k8s-version/serial/FirstStart (180.51s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-343650 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0914 17:52:54.134961    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/custom-flannel-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:52:56.698596    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/custom-flannel-572585/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-343650 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (3m0.506284876s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (180.51s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kubenet-572585 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.38s)

TestNetworkPlugins/group/kubenet/NetCatPod (13.38s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-572585 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-vwg6f" [ee871e26-c6fe-4739-b91d-06ed5f82dbc4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0914 17:53:01.821035    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/custom-flannel-572585/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-vwg6f" [ee871e26-c6fe-4739-b91d-06ed5f82dbc4] Running
E0914 17:53:12.063626    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/custom-flannel-572585/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 13.004973366s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (13.38s)

TestNetworkPlugins/group/kubenet/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-572585 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.23s)

TestNetworkPlugins/group/kubenet/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-572585 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.18s)

TestNetworkPlugins/group/kubenet/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-572585 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.17s)
E0914 18:06:22.024611    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/old-k8s-version-343650/client.crt: no such file or directory" logger="UnhandledError"

TestStartStop/group/no-preload/serial/FirstStart (53.84s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-391604 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0914 17:53:36.468992    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/calico-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:53:47.586199    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/kindnet-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:53:56.950560    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/calico-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:54:05.127554    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/skaffold-689497/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:54:13.507282    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/custom-flannel-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:54:19.387116    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/false-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:54:19.393497    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/false-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:54:19.404855    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/false-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:54:19.426260    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/false-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:54:19.467644    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/false-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:54:19.549114    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/false-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:54:19.711061    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/false-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:54:20.032776    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/false-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:54:20.674631    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/false-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:54:21.956665    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/false-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:54:24.519097    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/false-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:54:29.641343    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/false-572585/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-391604 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (53.839002662s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (53.84s)

TestStartStop/group/no-preload/serial/DeployApp (9.37s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-391604 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [17e329f8-957e-43de-b835-434c9fa65695] Pending
helpers_test.go:344: "busybox" [17e329f8-957e-43de-b835-434c9fa65695] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [17e329f8-957e-43de-b835-434c9fa65695] Running
E0914 17:54:36.807238    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/auto-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:54:37.912222    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/calico-572585/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.00462259s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-391604 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.37s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.16s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-391604 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0914 17:54:39.883016    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/false-572585/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-391604 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.035531584s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-391604 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.16s)

TestStartStop/group/no-preload/serial/Stop (10.9s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-391604 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-391604 --alsologtostderr -v=3: (10.89527172s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.90s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-391604 -n no-preload-391604
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-391604 -n no-preload-391604: exit status 7 (70.422218ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-391604 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)
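The step above deliberately tolerates `exit status 7` from `minikube status` ("status error: exit status 7 (may be ok)"): a stopped host is the expected state before re-enabling an addon, so only other non-zero codes would fail the test. The handling can be sketched in shell; `classify` is an illustrative helper, and the mapping of code 7 to a stopped host follows this log's own output, not an exhaustive list of minikube's status codes:

```shell
#!/bin/sh
# Illustrative handling of minikube status exit codes as this test applies
# them: 0 means the host is Running; 7, per the log above, corresponds to a
# Stopped host and is explicitly accepted. classify is a hypothetical
# helper, not minikube code.
classify() {
  case "$1" in
    0) echo "Running" ;;
    7) echo "Stopped (may be ok)" ;;
    *) echo "status error: exit status $1" ;;
  esac
}

classify 7  # prints "Stopped (may be ok)"
```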

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (330.5s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-391604 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0914 17:55:00.364682    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/false-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:55:04.510083    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/auto-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:55:21.683299    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/enable-default-cni-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:55:21.689683    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/enable-default-cni-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:55:21.701307    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/enable-default-cni-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:55:21.722554    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/enable-default-cni-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:55:21.763954    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/enable-default-cni-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:55:21.845305    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/enable-default-cni-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:55:22.006691    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/enable-default-cni-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:55:22.328913    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/enable-default-cni-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:55:22.971000    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/enable-default-cni-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:55:24.252435    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/enable-default-cni-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:55:26.813918    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/enable-default-cni-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:55:31.936200    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/enable-default-cni-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:55:35.428947    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/custom-flannel-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:55:41.326351    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/false-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:55:42.178365    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/enable-default-cni-572585/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-391604 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (5m30.031078414s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-391604 -n no-preload-391604
E0914 18:00:21.682930    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/enable-default-cni-572585/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (330.50s)
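The repeated `cert_rotation.go ... no such file or directory` errors above are noise, not test failures: the kubeconfig still references client certificates of minikube profiles (e.g. `enable-default-cni-572585`) deleted earlier in the run. A minimal sketch of detecting such dangling references, assuming a standard kubeconfig layout; `stale_certs` is a hypothetical helper, not part of minikube or its test suite:

```shell
#!/bin/sh
# stale_certs: print every client-certificate path referenced by a kubeconfig
# that no longer exists on disk -- the condition behind the repeated
# "cert_rotation.go ... no such file or directory" lines. (Hypothetical helper.)
stale_certs() {
  grep 'client-certificate:' "$1" 2>/dev/null |
    awk '{print $2}' |
    while read -r crt; do
      [ -e "$crt" ] || printf 'stale: %s\n' "$crt"
    done
}

# Example: scan the default kubeconfig if one is present.
[ -f "$HOME/.kube/config" ] && stale_certs "$HOME/.kube/config" || true
```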

TestStartStop/group/old-k8s-version/serial/DeployApp (9.63s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-343650 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [32315572-f927-48c4-8228-6ea5cd25b423] Pending
E0914 17:55:54.869708    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/flannel-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:55:54.876162    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/flannel-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:55:54.887565    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/flannel-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:55:54.908957    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/flannel-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:55:54.950447    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/flannel-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:55:55.031994    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/flannel-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:55:55.193484    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/flannel-572585/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [32315572-f927-48c4-8228-6ea5cd25b423] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0914 17:55:55.514963    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/flannel-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:55:56.156803    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/flannel-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:55:57.438768    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/flannel-572585/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [32315572-f927-48c4-8228-6ea5cd25b423] Running
E0914 17:55:59.834337    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/calico-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:56:00.000857    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/flannel-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:56:02.660195    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/enable-default-cni-572585/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.006294086s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-343650 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.63s)
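The `waiting 8m0s for pods matching ... healthy within ...` lines above come from a Go polling helper in helpers_test.go. A rough shell equivalent of that poll-until-healthy loop, offered only as a sketch; `wait_healthy` is a made-up name, not the test's actual implementation:

```shell
#!/bin/sh
# wait_healthy: run a health-check command once per second until it succeeds
# or the timeout (in seconds) expires. Sketch of the poll performed by the
# Go test helper; not the actual implementation.
wait_healthy() {
  t=$1; shift
  while [ "$t" -gt 0 ]; do
    "$@" && return 0       # healthy: stop polling
    sleep 1
    t=$((t - 1))
  done
  return 1                 # timed out
}

# Hypothetical usage mirroring the log above:
# wait_healthy 480 sh -c 'kubectl --context old-k8s-version-343650 get pods \
#   -l integration-test=busybox -o jsonpath="{.items[*].status.phase}" | grep -qx Running'
```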

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-343650 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0914 17:56:03.725107    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/kindnet-572585/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-343650 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.06s)

TestStartStop/group/old-k8s-version/serial/Stop (11.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-343650 --alsologtostderr -v=3
E0914 17:56:05.122761    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/flannel-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:56:15.364069    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/flannel-572585/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-343650 --alsologtostderr -v=3: (11.104952901s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.11s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-343650 -n old-k8s-version-343650
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-343650 -n old-k8s-version-343650: exit status 7 (69.605596ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-343650 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)
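`status error: exit status 7 (may be ok)` above is the harness tolerating specific nonzero exits from `minikube status`: 7 when the host is Stopped, and 2 when components are paused (as in the Pause test below). A sketch of that tolerance logic; `check_status` is a hypothetical wrapper, not minikube code:

```shell
#!/bin/sh
# check_status: run a status command and treat exit codes 2 (paused) and
# 7 (stopped) as acceptable, mirroring the "(may be ok)" handling in the log.
# Hypothetical wrapper; not part of minikube or its test suite.
check_status() {
  "$@"
  rc=$?
  case $rc in
    0)   ;;  # everything running
    2|7) echo "status error: exit status $rc (may be ok)" ;;
    *)   echo "unexpected status exit $rc" >&2; return "$rc" ;;
  esac
}

# Hypothetical usage:
# check_status out/minikube-linux-arm64 status --format='{{.Host}}' -p old-k8s-version-343650
```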

TestStartStop/group/old-k8s-version/serial/SecondStart (126.56s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-343650 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0914 17:56:31.428357    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/kindnet-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:56:35.845657    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/flannel-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:56:43.621596    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/enable-default-cni-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:57:03.248856    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/false-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:57:06.804516    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/functional-895781/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:57:16.807491    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/flannel-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:57:22.348745    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/bridge-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:57:22.355062    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/bridge-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:57:22.366365    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/bridge-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:57:22.388268    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/bridge-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:57:22.429677    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/bridge-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:57:22.511133    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/bridge-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:57:22.672540    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/bridge-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:57:22.994397    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/bridge-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:57:23.636397    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/bridge-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:57:23.735134    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/functional-895781/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:57:24.918200    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/bridge-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:57:27.479935    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/bridge-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:57:32.601206    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/bridge-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:57:42.060202    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/skaffold-689497/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:57:42.842953    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/bridge-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:57:45.632379    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/addons-522792/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:57:51.565726    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/custom-flannel-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:57:59.091132    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/kubenet-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:57:59.097827    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/kubenet-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:57:59.109190    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/kubenet-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:57:59.130667    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/kubenet-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:57:59.172052    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/kubenet-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:57:59.253478    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/kubenet-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:57:59.415132    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/kubenet-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:57:59.737142    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/kubenet-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:58:00.381484    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/kubenet-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:58:01.663473    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/kubenet-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:58:03.324580    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/bridge-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:58:04.225391    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/kubenet-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:58:05.543883    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/enable-default-cni-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:58:09.347611    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/kubenet-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:58:15.974902    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/calico-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:58:19.271542    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/custom-flannel-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:58:19.589216    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/kubenet-572585/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-343650 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m6.177254171s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-343650 -n old-k8s-version-343650
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (126.56s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-4qxvd" [fd8ce33e-be62-4948-80ec-a5fdb8ca39b3] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004804699s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-4qxvd" [fd8ce33e-be62-4948-80ec-a5fdb8ca39b3] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003861893s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-343650 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-343650 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/old-k8s-version/serial/Pause (2.87s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-343650 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-343650 -n old-k8s-version-343650
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-343650 -n old-k8s-version-343650: exit status 2 (335.449137ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-343650 -n old-k8s-version-343650
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-343650 -n old-k8s-version-343650: exit status 2 (357.256452ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-343650 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-343650 -n old-k8s-version-343650
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-343650 -n old-k8s-version-343650
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.87s)

TestStartStop/group/embed-certs/serial/FirstStart (68.16s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-790990 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0914 17:58:40.071408    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/kubenet-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:58:43.676103    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/calico-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:58:44.286342    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/bridge-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:59:19.386598    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/false-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:59:21.033112    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/kubenet-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:59:36.807532    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/auto-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:59:47.090716    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/false-572585/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-790990 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (1m8.157188393s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (68.16s)

TestStartStop/group/embed-certs/serial/DeployApp (10.41s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-790990 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e5dd08a7-7110-4567-a268-18b65a631b99] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e5dd08a7-7110-4567-a268-18b65a631b99] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.003828984s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-790990 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.41s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.43s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-790990 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-790990 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.279556595s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-790990 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.43s)

TestStartStop/group/embed-certs/serial/Stop (11.15s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-790990 --alsologtostderr -v=3
E0914 18:00:06.208200    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/bridge-572585/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-790990 --alsologtostderr -v=3: (11.146348402s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.15s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.3s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-790990 -n embed-certs-790990
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-790990 -n embed-certs-790990: exit status 7 (126.858901ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-790990 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.30s)

TestStartStop/group/embed-certs/serial/SecondStart (266.57s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-790990 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-790990 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m26.193802419s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-790990 -n embed-certs-790990
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (266.57s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-dblvs" [18dfc78c-1c93-46b9-a871-f98bfd79f1a3] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004393944s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.14s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-dblvs" [18dfc78c-1c93-46b9-a871-f98bfd79f1a3] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005522315s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-391604 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.14s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-391604 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/no-preload/serial/Pause (3.84s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-391604 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-391604 -n no-preload-391604
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-391604 -n no-preload-391604: exit status 2 (396.734428ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-391604 -n no-preload-391604
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-391604 -n no-preload-391604: exit status 2 (452.353269ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-391604 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-391604 -n no-preload-391604
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-391604 -n no-preload-391604
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.84s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (75.89s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-044702 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0914 18:00:42.955426    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/kubenet-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:00:49.385958    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/enable-default-cni-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:00:54.322441    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/old-k8s-version-343650/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:00:54.328851    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/old-k8s-version-343650/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:00:54.340254    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/old-k8s-version-343650/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:00:54.361644    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/old-k8s-version-343650/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:00:54.403143    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/old-k8s-version-343650/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:00:54.484515    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/old-k8s-version-343650/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:00:54.645967    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/old-k8s-version-343650/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:00:54.870048    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/flannel-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:00:54.967496    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/old-k8s-version-343650/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:00:55.609782    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/old-k8s-version-343650/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:00:56.891807    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/old-k8s-version-343650/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:00:59.453780    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/old-k8s-version-343650/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:01:03.724889    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/kindnet-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:01:04.576071    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/old-k8s-version-343650/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:01:14.817913    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/old-k8s-version-343650/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:01:22.570279    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/flannel-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:01:35.299306    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/old-k8s-version-343650/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-044702 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (1m15.886675937s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (75.89s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.41s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-044702 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6fdb829e-efad-4293-9119-117d8b86bbd9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [6fdb829e-efad-4293-9119-117d8b86bbd9] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003987353s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-044702 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.41s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-044702 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-044702 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.10s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (10.93s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-044702 --alsologtostderr -v=3
E0914 18:02:16.260837    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/old-k8s-version-343650/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-044702 --alsologtostderr -v=3: (10.932963494s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.93s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-044702 -n default-k8s-diff-port-044702
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-044702 -n default-k8s-diff-port-044702: exit status 7 (76.193907ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-044702 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (266.86s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-044702 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0914 18:02:22.348356    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/bridge-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:02:23.734907    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/functional-895781/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:02:42.060827    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/skaffold-689497/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:02:45.632858    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/addons-522792/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:02:50.049724    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/bridge-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:02:51.565197    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/custom-flannel-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:02:59.091862    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/kubenet-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:03:15.974266    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/calico-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:03:26.797148    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/kubenet-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:03:38.182795    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/old-k8s-version-343650/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:04:19.386557    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/false-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:04:29.968172    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/no-preload-391604/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:04:29.974542    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/no-preload-391604/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:04:29.985930    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/no-preload-391604/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:04:30.008643    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/no-preload-391604/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:04:30.050970    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/no-preload-391604/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:04:30.132905    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/no-preload-391604/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:04:30.294577    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/no-preload-391604/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:04:30.616567    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/no-preload-391604/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:04:31.257910    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/no-preload-391604/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:04:32.539421    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/no-preload-391604/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:04:35.101497    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/no-preload-391604/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:04:36.807525    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/auto-572585/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-044702 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m26.374403935s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-044702 -n default-k8s-diff-port-044702
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (266.86s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-vtqsf" [5c44b210-dee1-4430-8c61-3fcdb4a53634] Running
E0914 18:04:40.223501    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/no-preload-391604/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004176806s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-vtqsf" [5c44b210-dee1-4430-8c61-3fcdb4a53634] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003342667s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-790990 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-790990 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/embed-certs/serial/Pause (2.87s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-790990 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-790990 -n embed-certs-790990
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-790990 -n embed-certs-790990: exit status 2 (343.16094ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-790990 -n embed-certs-790990
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-790990 -n embed-certs-790990: exit status 2 (357.385179ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-790990 --alsologtostderr -v=1
E0914 18:04:50.465442    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/no-preload-391604/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-790990 -n embed-certs-790990
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-790990 -n embed-certs-790990
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.87s)

TestStartStop/group/newest-cni/serial/FirstStart (42.94s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-724898 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0914 18:05:10.947225    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/no-preload-391604/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:05:21.683295    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/enable-default-cni-572585/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-724898 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (42.937453186s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (42.94s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.16s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-724898 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-724898 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.16360075s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.16s)

TestStartStop/group/newest-cni/serial/Stop (8.61s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-724898 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-724898 --alsologtostderr -v=3: (8.60687588s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.61s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-724898 -n newest-cni-724898
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-724898 -n newest-cni-724898: exit status 7 (74.62529ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-724898 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/newest-cni/serial/SecondStart (18.77s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-724898 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0914 18:05:51.909037    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/no-preload-391604/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:05:54.322511    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/old-k8s-version-343650/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:05:54.869803    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/flannel-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:05:59.871944    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/auto-572585/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:06:03.725116    7537 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/kindnet-572585/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-724898 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (18.32258223s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-724898 -n newest-cni-724898
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (18.77s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.35s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-724898 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.35s)

TestStartStop/group/newest-cni/serial/Pause (3.06s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-724898 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-724898 -n newest-cni-724898
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-724898 -n newest-cni-724898: exit status 2 (337.136379ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-724898 -n newest-cni-724898
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-724898 -n newest-cni-724898: exit status 2 (345.992231ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-724898 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-724898 -n newest-cni-724898
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-724898 -n newest-cni-724898
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.06s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-mqd8h" [12050e9c-8728-4c54-b2f8-3a3a95e9a23f] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003544991s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-mqd8h" [12050e9c-8728-4c54-b2f8-3a3a95e9a23f] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004051558s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-044702 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-044702 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.78s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-044702 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-044702 -n default-k8s-diff-port-044702
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-044702 -n default-k8s-diff-port-044702: exit status 2 (319.311002ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-044702 -n default-k8s-diff-port-044702
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-044702 -n default-k8s-diff-port-044702: exit status 2 (323.266412ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-044702 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-044702 -n default-k8s-diff-port-044702
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-044702 -n default-k8s-diff-port-044702
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.78s)

Test skip (24/343)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnlyKic (0.58s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-909320 --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-909320" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-909320
--- SKIP: TestDownloadOnlyKic (0.58s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/cilium (4.22s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-572585 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-572585

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-572585

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-572585

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-572585

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-572585

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-572585

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-572585

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-572585

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-572585

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-572585

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-572585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-572585"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-572585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-572585"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-572585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-572585"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-572585

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-572585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-572585"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-572585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-572585"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-572585" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-572585" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-572585" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-572585" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-572585" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-572585" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-572585" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-572585" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-572585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-572585"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-572585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-572585"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-572585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-572585"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-572585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-572585"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-572585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-572585"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-572585

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-572585

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-572585" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-572585" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-572585

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-572585

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-572585" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-572585" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-572585" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-572585" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-572585" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-572585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-572585"

>>> host: kubelet daemon config:
* Profile "cilium-572585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-572585"

>>> k8s: kubelet logs:
* Profile "cilium-572585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-572585"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-572585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-572585"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-572585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-572585"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19643-2222/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 14 Sep 2024 17:36:09 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-693102
contexts:
- context:
    cluster: kubernetes-upgrade-693102
    user: kubernetes-upgrade-693102
  name: kubernetes-upgrade-693102
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-693102
  user:
    client-certificate: /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/kubernetes-upgrade-693102/client.crt
    client-key: /home/jenkins/minikube-integration/19643-2222/.minikube/profiles/kubernetes-upgrade-693102/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-572585

>>> host: docker daemon status:
* Profile "cilium-572585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-572585"

>>> host: docker daemon config:
* Profile "cilium-572585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-572585"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-572585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-572585"

>>> host: docker system info:
* Profile "cilium-572585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-572585"

>>> host: cri-docker daemon status:
* Profile "cilium-572585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-572585"

>>> host: cri-docker daemon config:
* Profile "cilium-572585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-572585"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-572585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-572585"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-572585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-572585"

>>> host: cri-dockerd version:
* Profile "cilium-572585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-572585"

>>> host: containerd daemon status:
* Profile "cilium-572585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-572585"

>>> host: containerd daemon config:
* Profile "cilium-572585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-572585"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-572585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-572585"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-572585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-572585"

>>> host: containerd config dump:
* Profile "cilium-572585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-572585"

>>> host: crio daemon status:
* Profile "cilium-572585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-572585"

>>> host: crio daemon config:
* Profile "cilium-572585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-572585"

>>> host: /etc/crio:
* Profile "cilium-572585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-572585"

>>> host: crio config:
* Profile "cilium-572585" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-572585"

----------------------- debugLogs end: cilium-572585 [took: 4.078798292s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-572585" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-572585
--- SKIP: TestNetworkPlugins/group/cilium (4.22s)

TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-128629" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-128629
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)