Test Report: Docker_Linux_docker_arm64 19690

                    
f8db61c9b74e1fc8d4208c01add19855c5953b45:2024-09-23:36339

Failed tests (1/342)

| Order | Failed test                  | Duration (s) |
|-------|------------------------------|--------------|
| 33    | TestAddons/parallel/Registry | 74.69        |
TestAddons/parallel/Registry (74.69s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 9.950688ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-tgghm" [ec93b34f-db00-4bde-8ed0-46a67564f5cc] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003613597s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-tf8z6" [7b435d50-4b55-4c70-b6d9-b0e1fd522370] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00389476s
addons_test.go:338: (dbg) Run:  kubectl --context addons-816293 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context addons-816293 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context addons-816293 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.158040303s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-816293 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
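The probe that timed out above is essentially a headers-only HTTP request (`wget --spider -S`) against the registry Service, expected to return HTTP 200. As a self-contained illustration of that check (using a local stand-in server rather than the in-cluster `registry.kube-system.svc.cluster.local`, which is only reachable from inside the cluster):

```python
import http.server
import threading
import urllib.request

# Local stand-in for the registry endpoint; the real test targets the
# in-cluster Service URL, which is unreachable from outside the cluster.
server = http.server.HTTPServer(("127.0.0.1", 0),
                                http.server.SimpleHTTPRequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_address[1]}/"

# Equivalent of `wget --spider -S`: fetch headers only, no body.
req = urllib.request.Request(url, method="HEAD")
with urllib.request.urlopen(req, timeout=5) as resp:
    print(resp.status)  # → 200; the failing test never got this far

server.shutdown()
```

When the request hangs for the full timeout, as in this run, the failure usually points at in-cluster DNS or the registry pods rather than at the test itself.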
addons_test.go:357: (dbg) Run:  out/minikube-linux-arm64 -p addons-816293 ip
2024/09/23 13:23:32 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:386: (dbg) Run:  out/minikube-linux-arm64 -p addons-816293 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-816293
helpers_test.go:235: (dbg) docker inspect addons-816293:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2cb365819d999761da514a28f0db1a8ebcc8ea3a0f82b4287e409b8e04632dbc",
	        "Created": "2024-09-23T13:10:18.540481566Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 721430,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-23T13:10:18.678365482Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c94982da1293baee77c00993711af197ed62d6b1a4ee12c0caa4f57c70de4fdc",
	        "ResolvConfPath": "/var/lib/docker/containers/2cb365819d999761da514a28f0db1a8ebcc8ea3a0f82b4287e409b8e04632dbc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2cb365819d999761da514a28f0db1a8ebcc8ea3a0f82b4287e409b8e04632dbc/hostname",
	        "HostsPath": "/var/lib/docker/containers/2cb365819d999761da514a28f0db1a8ebcc8ea3a0f82b4287e409b8e04632dbc/hosts",
	        "LogPath": "/var/lib/docker/containers/2cb365819d999761da514a28f0db1a8ebcc8ea3a0f82b4287e409b8e04632dbc/2cb365819d999761da514a28f0db1a8ebcc8ea3a0f82b4287e409b8e04632dbc-json.log",
	        "Name": "/addons-816293",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-816293:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-816293",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2b52daefb80c4c5651c0b08e78d9e1243c06d774ec762c66681c0f9a6849359a-init/diff:/var/lib/docker/overlay2/fce1ff641bd7a248af78be64b9f17f07383efee2fce882f3a641b971f5d14d46/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2b52daefb80c4c5651c0b08e78d9e1243c06d774ec762c66681c0f9a6849359a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2b52daefb80c4c5651c0b08e78d9e1243c06d774ec762c66681c0f9a6849359a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2b52daefb80c4c5651c0b08e78d9e1243c06d774ec762c66681c0f9a6849359a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-816293",
	                "Source": "/var/lib/docker/volumes/addons-816293/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-816293",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-816293",
	                "name.minikube.sigs.k8s.io": "addons-816293",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "13ba91a7a3b5a4284e99f463bc9345eac2887bfbbacb3d524406d1c75694d419",
	            "SandboxKey": "/var/run/docker/netns/13ba91a7a3b5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33528"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33529"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33532"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33530"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33531"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-816293": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "52fe0ca6caebd72d934c58931acd354ed76e05ac5c97464262d266155e0634b4",
	                    "EndpointID": "76474860bb3303c67977778a0a934a7357992a6a12034b30f5fd27b16c81e85d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-816293",
	                        "2cb365819d99"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
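The inspect dump above shows the container's registry port `5000/tcp` published on `127.0.0.1:33530`, which is what the later `GET http://192.168.49.2:5000` debug line exercises from the container network side. A small stdlib-only sketch for pulling host port bindings out of `docker inspect` JSON (`host_bindings` is a hypothetical helper; the embedded sample is abbreviated from the dump above):

```python
import json

# Minimal excerpt mirroring the `docker inspect` JSON structure above;
# real input comes from `docker inspect <container>`.
inspect_output = json.loads("""
[
  {
    "Name": "/addons-816293",
    "NetworkSettings": {
      "Ports": {
        "22/tcp":   [{"HostIp": "127.0.0.1", "HostPort": "33528"}],
        "5000/tcp": [{"HostIp": "127.0.0.1", "HostPort": "33530"}],
        "8443/tcp": [{"HostIp": "127.0.0.1", "HostPort": "33531"}]
      }
    }
  }
]
""")

def host_bindings(inspect_json):
    """Map container port -> list of "host_ip:host_port" strings."""
    ports = inspect_json[0]["NetworkSettings"]["Ports"] or {}
    return {
        cport: [f'{b["HostIp"]}:{b["HostPort"]}' for b in (bindings or [])]
        for cport, bindings in ports.items()
    }

print(host_bindings(inspect_output)["5000/tcp"])  # → ['127.0.0.1:33530']
```

Ports with no published binding appear as `null` in the real output, which the `bindings or []` guard tolerates.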
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-816293 -n addons-816293
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-816293 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-816293 logs -n 25: (1.202549734s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-223839   | jenkins | v1.34.0 | 23 Sep 24 13:09 UTC |                     |
	|         | -p download-only-223839                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                                                |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube               | jenkins | v1.34.0 | 23 Sep 24 13:09 UTC | 23 Sep 24 13:09 UTC |
	| delete  | -p download-only-223839                                                                     | download-only-223839   | jenkins | v1.34.0 | 23 Sep 24 13:09 UTC | 23 Sep 24 13:09 UTC |
	| start   | -o=json --download-only                                                                     | download-only-136397   | jenkins | v1.34.0 | 23 Sep 24 13:09 UTC |                     |
	|         | -p download-only-136397                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                                                                |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube               | jenkins | v1.34.0 | 23 Sep 24 13:09 UTC | 23 Sep 24 13:09 UTC |
	| delete  | -p download-only-136397                                                                     | download-only-136397   | jenkins | v1.34.0 | 23 Sep 24 13:09 UTC | 23 Sep 24 13:09 UTC |
	| delete  | -p download-only-223839                                                                     | download-only-223839   | jenkins | v1.34.0 | 23 Sep 24 13:09 UTC | 23 Sep 24 13:09 UTC |
	| delete  | -p download-only-136397                                                                     | download-only-136397   | jenkins | v1.34.0 | 23 Sep 24 13:09 UTC | 23 Sep 24 13:09 UTC |
	| start   | --download-only -p                                                                          | download-docker-126922 | jenkins | v1.34.0 | 23 Sep 24 13:09 UTC |                     |
	|         | download-docker-126922                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | -p download-docker-126922                                                                   | download-docker-126922 | jenkins | v1.34.0 | 23 Sep 24 13:09 UTC | 23 Sep 24 13:09 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-953246   | jenkins | v1.34.0 | 23 Sep 24 13:09 UTC |                     |
	|         | binary-mirror-953246                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:39347                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-953246                                                                     | binary-mirror-953246   | jenkins | v1.34.0 | 23 Sep 24 13:09 UTC | 23 Sep 24 13:09 UTC |
	| addons  | disable dashboard -p                                                                        | addons-816293          | jenkins | v1.34.0 | 23 Sep 24 13:09 UTC |                     |
	|         | addons-816293                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-816293          | jenkins | v1.34.0 | 23 Sep 24 13:09 UTC |                     |
	|         | addons-816293                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-816293 --wait=true                                                                | addons-816293          | jenkins | v1.34.0 | 23 Sep 24 13:09 UTC | 23 Sep 24 13:13 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | addons-816293 addons disable                                                                | addons-816293          | jenkins | v1.34.0 | 23 Sep 24 13:14 UTC | 23 Sep 24 13:14 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-816293          | jenkins | v1.34.0 | 23 Sep 24 13:22 UTC | 23 Sep 24 13:22 UTC |
	|         | -p addons-816293                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-816293 addons disable                                                                | addons-816293          | jenkins | v1.34.0 | 23 Sep 24 13:22 UTC | 23 Sep 24 13:22 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-816293 addons disable                                                                | addons-816293          | jenkins | v1.34.0 | 23 Sep 24 13:22 UTC | 23 Sep 24 13:22 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-816293          | jenkins | v1.34.0 | 23 Sep 24 13:22 UTC | 23 Sep 24 13:22 UTC |
	|         | -p addons-816293                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-816293 ssh cat                                                                       | addons-816293          | jenkins | v1.34.0 | 23 Sep 24 13:23 UTC | 23 Sep 24 13:23 UTC |
	|         | /opt/local-path-provisioner/pvc-3f2fcd29-74af-42b3-bac1-c6876ced45a4_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-816293 addons disable                                                                | addons-816293          | jenkins | v1.34.0 | 23 Sep 24 13:23 UTC |                     |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-816293 ip                                                                            | addons-816293          | jenkins | v1.34.0 | 23 Sep 24 13:23 UTC | 23 Sep 24 13:23 UTC |
	| addons  | addons-816293 addons disable                                                                | addons-816293          | jenkins | v1.34.0 | 23 Sep 24 13:23 UTC | 23 Sep 24 13:23 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 13:09:53
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 13:09:53.885694  720939 out.go:345] Setting OutFile to fd 1 ...
	I0923 13:09:53.885836  720939 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:09:53.885847  720939 out.go:358] Setting ErrFile to fd 2...
	I0923 13:09:53.885853  720939 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:09:53.886116  720939 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-714802/.minikube/bin
	I0923 13:09:53.886582  720939 out.go:352] Setting JSON to false
	I0923 13:09:53.887428  720939 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":10342,"bootTime":1727086652,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0923 13:09:53.887509  720939 start.go:139] virtualization:  
	I0923 13:09:53.889712  720939 out.go:177] * [addons-816293] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0923 13:09:53.891804  720939 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 13:09:53.891976  720939 notify.go:220] Checking for updates...
	I0923 13:09:53.895295  720939 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 13:09:53.897006  720939 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19690-714802/kubeconfig
	I0923 13:09:53.898864  720939 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-714802/.minikube
	I0923 13:09:53.900533  720939 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0923 13:09:53.902274  720939 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 13:09:53.904259  720939 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 13:09:53.933085  720939 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0923 13:09:53.933224  720939 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 13:09:53.990778  720939 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-23 13:09:53.981675184 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 13:09:53.990899  720939 docker.go:318] overlay module found
	I0923 13:09:53.992830  720939 out.go:177] * Using the docker driver based on user configuration
	I0923 13:09:53.994315  720939 start.go:297] selected driver: docker
	I0923 13:09:53.994333  720939 start.go:901] validating driver "docker" against <nil>
	I0923 13:09:53.994348  720939 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 13:09:53.995010  720939 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 13:09:54.052750  720939 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-23 13:09:54.043031055 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 13:09:54.053010  720939 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 13:09:54.053282  720939 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 13:09:54.055174  720939 out.go:177] * Using Docker driver with root privileges
	I0923 13:09:54.056860  720939 cni.go:84] Creating CNI manager for ""
	I0923 13:09:54.057074  720939 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 13:09:54.057092  720939 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 13:09:54.057205  720939 start.go:340] cluster config:
	{Name:addons-816293 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-816293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 13:09:54.059348  720939 out.go:177] * Starting "addons-816293" primary control-plane node in "addons-816293" cluster
	I0923 13:09:54.061046  720939 cache.go:121] Beginning downloading kic base image for docker with docker
	I0923 13:09:54.062855  720939 out.go:177] * Pulling base image v0.0.45-1726784731-19672 ...
	I0923 13:09:54.064557  720939 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 13:09:54.064637  720939 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19690-714802/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 13:09:54.064648  720939 cache.go:56] Caching tarball of preloaded images
	I0923 13:09:54.064562  720939 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon
	I0923 13:09:54.064751  720939 preload.go:172] Found /home/jenkins/minikube-integration/19690-714802/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 13:09:54.064762  720939 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 13:09:54.065261  720939 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/config.json ...
	I0923 13:09:54.065300  720939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/config.json: {Name:mkb1a0f55dddf93747091075b7c9989144106a78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:09:54.080717  720939 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
	I0923 13:09:54.080849  720939 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory
	I0923 13:09:54.080875  720939 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory, skipping pull
	I0923 13:09:54.080884  720939 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed exists in cache, skipping pull
	I0923 13:09:54.080913  720939 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed as a tarball
	I0923 13:09:54.080925  720939 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed from local cache
	I0923 13:10:11.612869  720939 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed from cached tarball
	I0923 13:10:11.612907  720939 cache.go:194] Successfully downloaded all kic artifacts
	I0923 13:10:11.612957  720939 start.go:360] acquireMachinesLock for addons-816293: {Name:mkdca502684789b9579f34074a545d39dc0069d4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 13:10:11.613679  720939 start.go:364] duration metric: took 692.309µs to acquireMachinesLock for "addons-816293"
	I0923 13:10:11.613720  720939 start.go:93] Provisioning new machine with config: &{Name:addons-816293 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-816293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 13:10:11.613800  720939 start.go:125] createHost starting for "" (driver="docker")
	I0923 13:10:11.617016  720939 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0923 13:10:11.617273  720939 start.go:159] libmachine.API.Create for "addons-816293" (driver="docker")
	I0923 13:10:11.617310  720939 client.go:168] LocalClient.Create starting
	I0923 13:10:11.617438  720939 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19690-714802/.minikube/certs/ca.pem
	I0923 13:10:12.072567  720939 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19690-714802/.minikube/certs/cert.pem
	I0923 13:10:12.560564  720939 cli_runner.go:164] Run: docker network inspect addons-816293 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0923 13:10:12.575644  720939 cli_runner.go:211] docker network inspect addons-816293 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0923 13:10:12.575733  720939 network_create.go:284] running [docker network inspect addons-816293] to gather additional debugging logs...
	I0923 13:10:12.575757  720939 cli_runner.go:164] Run: docker network inspect addons-816293
	W0923 13:10:12.590943  720939 cli_runner.go:211] docker network inspect addons-816293 returned with exit code 1
	I0923 13:10:12.590975  720939 network_create.go:287] error running [docker network inspect addons-816293]: docker network inspect addons-816293: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-816293 not found
	I0923 13:10:12.590996  720939 network_create.go:289] output of [docker network inspect addons-816293]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-816293 not found
	
	** /stderr **
	I0923 13:10:12.591095  720939 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0923 13:10:12.606292  720939 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40016fa9d0}
	I0923 13:10:12.606335  720939 network_create.go:124] attempt to create docker network addons-816293 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0923 13:10:12.606393  720939 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-816293 addons-816293
	I0923 13:10:12.672321  720939 network_create.go:108] docker network addons-816293 192.168.49.0/24 created
	I0923 13:10:12.672350  720939 kic.go:121] calculated static IP "192.168.49.2" for the "addons-816293" container
	I0923 13:10:12.672435  720939 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0923 13:10:12.687929  720939 cli_runner.go:164] Run: docker volume create addons-816293 --label name.minikube.sigs.k8s.io=addons-816293 --label created_by.minikube.sigs.k8s.io=true
	I0923 13:10:12.705673  720939 oci.go:103] Successfully created a docker volume addons-816293
	I0923 13:10:12.705766  720939 cli_runner.go:164] Run: docker run --rm --name addons-816293-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-816293 --entrypoint /usr/bin/test -v addons-816293:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -d /var/lib
	I0923 13:10:14.760929  720939 cli_runner.go:217] Completed: docker run --rm --name addons-816293-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-816293 --entrypoint /usr/bin/test -v addons-816293:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -d /var/lib: (2.055120426s)
	I0923 13:10:14.760983  720939 oci.go:107] Successfully prepared a docker volume addons-816293
	I0923 13:10:14.761002  720939 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 13:10:14.761022  720939 kic.go:194] Starting extracting preloaded images to volume ...
	I0923 13:10:14.761086  720939 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19690-714802/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-816293:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -I lz4 -xf /preloaded.tar -C /extractDir
	I0923 13:10:18.470478  720939 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19690-714802/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-816293:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -I lz4 -xf /preloaded.tar -C /extractDir: (3.709347415s)
	I0923 13:10:18.470512  720939 kic.go:203] duration metric: took 3.709486868s to extract preloaded images to volume ...
	W0923 13:10:18.470651  720939 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0923 13:10:18.470771  720939 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0923 13:10:18.526038  720939 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-816293 --name addons-816293 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-816293 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-816293 --network addons-816293 --ip 192.168.49.2 --volume addons-816293:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed
	I0923 13:10:18.862023  720939 cli_runner.go:164] Run: docker container inspect addons-816293 --format={{.State.Running}}
	I0923 13:10:18.888667  720939 cli_runner.go:164] Run: docker container inspect addons-816293 --format={{.State.Status}}
	I0923 13:10:18.911529  720939 cli_runner.go:164] Run: docker exec addons-816293 stat /var/lib/dpkg/alternatives/iptables
	I0923 13:10:18.980838  720939 oci.go:144] the created container "addons-816293" has a running status.
	I0923 13:10:18.980873  720939 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19690-714802/.minikube/machines/addons-816293/id_rsa...
	I0923 13:10:19.869841  720939 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19690-714802/.minikube/machines/addons-816293/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0923 13:10:19.896704  720939 cli_runner.go:164] Run: docker container inspect addons-816293 --format={{.State.Status}}
	I0923 13:10:19.915359  720939 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0923 13:10:19.915378  720939 kic_runner.go:114] Args: [docker exec --privileged addons-816293 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0923 13:10:19.985971  720939 cli_runner.go:164] Run: docker container inspect addons-816293 --format={{.State.Status}}
	I0923 13:10:20.003533  720939 machine.go:93] provisionDockerMachine start ...
	I0923 13:10:20.003651  720939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-816293
	I0923 13:10:20.034660  720939 main.go:141] libmachine: Using SSH client type: native
	I0923 13:10:20.035077  720939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33528 <nil> <nil>}
	I0923 13:10:20.035104  720939 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 13:10:20.168749  720939 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-816293
	
	I0923 13:10:20.168790  720939 ubuntu.go:169] provisioning hostname "addons-816293"
	I0923 13:10:20.168880  720939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-816293
	I0923 13:10:20.187171  720939 main.go:141] libmachine: Using SSH client type: native
	I0923 13:10:20.187420  720939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33528 <nil> <nil>}
	I0923 13:10:20.187439  720939 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-816293 && echo "addons-816293" | sudo tee /etc/hostname
	I0923 13:10:20.333296  720939 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-816293
	
	I0923 13:10:20.333437  720939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-816293
	I0923 13:10:20.350731  720939 main.go:141] libmachine: Using SSH client type: native
	I0923 13:10:20.351003  720939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33528 <nil> <nil>}
	I0923 13:10:20.351026  720939 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-816293' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-816293/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-816293' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 13:10:20.484996  720939 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 13:10:20.485030  720939 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19690-714802/.minikube CaCertPath:/home/jenkins/minikube-integration/19690-714802/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19690-714802/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19690-714802/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19690-714802/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19690-714802/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19690-714802/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19690-714802/.minikube}
	I0923 13:10:20.485054  720939 ubuntu.go:177] setting up certificates
	I0923 13:10:20.485065  720939 provision.go:84] configureAuth start
	I0923 13:10:20.485126  720939 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-816293
	I0923 13:10:20.502106  720939 provision.go:143] copyHostCerts
	I0923 13:10:20.502186  720939 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-714802/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19690-714802/.minikube/ca.pem (1078 bytes)
	I0923 13:10:20.502308  720939 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-714802/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19690-714802/.minikube/cert.pem (1123 bytes)
	I0923 13:10:20.502371  720939 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-714802/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19690-714802/.minikube/key.pem (1675 bytes)
	I0923 13:10:20.502421  720939 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19690-714802/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19690-714802/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19690-714802/.minikube/certs/ca-key.pem org=jenkins.addons-816293 san=[127.0.0.1 192.168.49.2 addons-816293 localhost minikube]
	I0923 13:10:20.936599  720939 provision.go:177] copyRemoteCerts
	I0923 13:10:20.936671  720939 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 13:10:20.936722  720939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-816293
	I0923 13:10:20.952754  720939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19690-714802/.minikube/machines/addons-816293/id_rsa Username:docker}
	I0923 13:10:21.054230  720939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-714802/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0923 13:10:21.080631  720939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-714802/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0923 13:10:21.105849  720939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-714802/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0923 13:10:21.131556  720939 provision.go:87] duration metric: took 646.475864ms to configureAuth
	I0923 13:10:21.131584  720939 ubuntu.go:193] setting minikube options for container-runtime
	I0923 13:10:21.131779  720939 config.go:182] Loaded profile config "addons-816293": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 13:10:21.131846  720939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-816293
	I0923 13:10:21.148230  720939 main.go:141] libmachine: Using SSH client type: native
	I0923 13:10:21.148493  720939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33528 <nil> <nil>}
	I0923 13:10:21.148515  720939 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0923 13:10:21.281722  720939 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0923 13:10:21.281795  720939 ubuntu.go:71] root file system type: overlay
	I0923 13:10:21.281922  720939 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0923 13:10:21.281997  720939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-816293
	I0923 13:10:21.299391  720939 main.go:141] libmachine: Using SSH client type: native
	I0923 13:10:21.299659  720939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33528 <nil> <nil>}
	I0923 13:10:21.299746  720939 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0923 13:10:21.446255  720939 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
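Editor's note: the bare `ExecStart=` line in the unit written above is systemd's documented mechanism for clearing an inherited command list before assigning a new one; without it, a second `ExecStart=` on a non-oneshot service is rejected exactly as the in-file comment describes. A minimal illustrative fragment of the same pattern (the override path here is a hypothetical example, not the file minikube writes):

```ini
# /etc/systemd/system/docker.service.d/override.conf  (hypothetical path)
[Service]
# An empty assignment clears the ExecStart list inherited from the base
# unit; systemd otherwise fails with "more than one ExecStart= setting".
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
```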
	I0923 13:10:21.446401  720939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-816293
	I0923 13:10:21.464616  720939 main.go:141] libmachine: Using SSH client type: native
	I0923 13:10:21.464864  720939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33528 <nil> <nil>}
	I0923 13:10:21.464882  720939 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0923 13:10:22.251146  720939 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-09-19 14:24:16.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-09-23 13:10:21.440696641 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0923 13:10:22.251180  720939 machine.go:96] duration metric: took 2.24762702s to provisionDockerMachine
	I0923 13:10:22.251192  720939 client.go:171] duration metric: took 10.633872078s to LocalClient.Create
	I0923 13:10:22.251205  720939 start.go:167] duration metric: took 10.633933681s to libmachine.API.Create "addons-816293"
	I0923 13:10:22.251213  720939 start.go:293] postStartSetup for "addons-816293" (driver="docker")
	I0923 13:10:22.251224  720939 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 13:10:22.251294  720939 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 13:10:22.251338  720939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-816293
	I0923 13:10:22.268312  720939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19690-714802/.minikube/machines/addons-816293/id_rsa Username:docker}
	I0923 13:10:22.363759  720939 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 13:10:22.367277  720939 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0923 13:10:22.367316  720939 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0923 13:10:22.367328  720939 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0923 13:10:22.367335  720939 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0923 13:10:22.367345  720939 filesync.go:126] Scanning /home/jenkins/minikube-integration/19690-714802/.minikube/addons for local assets ...
	I0923 13:10:22.367418  720939 filesync.go:126] Scanning /home/jenkins/minikube-integration/19690-714802/.minikube/files for local assets ...
	I0923 13:10:22.367447  720939 start.go:296] duration metric: took 116.227236ms for postStartSetup
	I0923 13:10:22.367760  720939 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-816293
	I0923 13:10:22.384260  720939 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/config.json ...
	I0923 13:10:22.384542  720939 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 13:10:22.384593  720939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-816293
	I0923 13:10:22.401121  720939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19690-714802/.minikube/machines/addons-816293/id_rsa Username:docker}
	I0923 13:10:22.493463  720939 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0923 13:10:22.497713  720939 start.go:128] duration metric: took 10.883898048s to createHost
	I0923 13:10:22.497740  720939 start.go:83] releasing machines lock for "addons-816293", held for 10.884042104s
	I0923 13:10:22.497845  720939 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-816293
	I0923 13:10:22.514262  720939 ssh_runner.go:195] Run: cat /version.json
	I0923 13:10:22.514315  720939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-816293
	I0923 13:10:22.514377  720939 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 13:10:22.514459  720939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-816293
	I0923 13:10:22.532315  720939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19690-714802/.minikube/machines/addons-816293/id_rsa Username:docker}
	I0923 13:10:22.539857  720939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19690-714802/.minikube/machines/addons-816293/id_rsa Username:docker}
	I0923 13:10:22.624559  720939 ssh_runner.go:195] Run: systemctl --version
	I0923 13:10:22.755452  720939 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0923 13:10:22.759660  720939 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0923 13:10:22.784001  720939 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0923 13:10:22.784085  720939 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 13:10:22.813434  720939 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0923 13:10:22.813459  720939 start.go:495] detecting cgroup driver to use...
	I0923 13:10:22.813496  720939 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0923 13:10:22.813596  720939 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 13:10:22.830343  720939 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0923 13:10:22.840146  720939 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0923 13:10:22.850405  720939 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0923 13:10:22.850475  720939 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0923 13:10:22.860761  720939 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 13:10:22.870877  720939 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0923 13:10:22.881161  720939 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 13:10:22.891331  720939 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 13:10:22.900493  720939 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0923 13:10:22.910332  720939 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0923 13:10:22.919772  720939 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0923 13:10:22.929603  720939 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 13:10:22.937959  720939 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 13:10:22.946316  720939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:10:23.025588  720939 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0923 13:10:23.116474  720939 start.go:495] detecting cgroup driver to use...
	I0923 13:10:23.116532  720939 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0923 13:10:23.116594  720939 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0923 13:10:23.135477  720939 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0923 13:10:23.135557  720939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 13:10:23.148835  720939 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 13:10:23.167243  720939 ssh_runner.go:195] Run: which cri-dockerd
	I0923 13:10:23.171873  720939 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0923 13:10:23.181919  720939 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0923 13:10:23.202728  720939 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0923 13:10:23.312562  720939 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0923 13:10:23.413833  720939 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0923 13:10:23.413994  720939 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0923 13:10:23.434193  720939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:10:23.527544  720939 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0923 13:10:23.795685  720939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0923 13:10:23.808571  720939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 13:10:23.821252  720939 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0923 13:10:23.906189  720939 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0923 13:10:23.989692  720939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:10:24.085184  720939 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0923 13:10:24.100608  720939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 13:10:24.112996  720939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:10:24.202966  720939 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0923 13:10:24.276610  720939 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0923 13:10:24.276701  720939 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0923 13:10:24.280696  720939 start.go:563] Will wait 60s for crictl version
	I0923 13:10:24.280764  720939 ssh_runner.go:195] Run: which crictl
	I0923 13:10:24.284726  720939 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 13:10:24.320025  720939 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.0
	RuntimeApiVersion:  v1
	I0923 13:10:24.320096  720939 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 13:10:24.343422  720939 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 13:10:24.368709  720939 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.0 ...
	I0923 13:10:24.368810  720939 cli_runner.go:164] Run: docker network inspect addons-816293 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0923 13:10:24.383883  720939 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0923 13:10:24.387438  720939 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
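Editor's note: the `/etc/hosts` command above is an upsert idiom — strip any existing line for the name with `grep -v`, append the desired mapping, and stage through a temp file before `sudo cp` back into place (copying rather than moving preserves the original file's ownership and SELinux context). A simplified sketch against a temp file instead of the real `/etc/hosts` (the log's pattern also anchors on a leading tab, omitted here for POSIX-sh portability):

```shell
#!/bin/sh
# Upsert a hosts entry: drop any stale line for the host name, then
# append the desired mapping, writing via a temp file.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.49.1\thost.minikube.internal\n' > "$hosts"

tmp=$(mktemp)
{ grep -v 'host.minikube.internal$' "$hosts"
  printf '192.168.49.1\thost.minikube.internal\n'
} > "$tmp"
cp "$tmp" "$hosts"   # the real code uses `sudo cp` into /etc/hosts

cat "$hosts"
```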
	I0923 13:10:24.397959  720939 kubeadm.go:883] updating cluster {Name:addons-816293 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-816293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuF
irmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 13:10:24.398077  720939 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 13:10:24.398147  720939 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0923 13:10:24.415395  720939 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0923 13:10:24.415415  720939 docker.go:615] Images already preloaded, skipping extraction
	I0923 13:10:24.415477  720939 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0923 13:10:24.434191  720939 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0923 13:10:24.434217  720939 cache_images.go:84] Images are preloaded, skipping loading
	I0923 13:10:24.434229  720939 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 docker true true} ...
	I0923 13:10:24.434325  720939 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-816293 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-816293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 13:10:24.434394  720939 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0923 13:10:24.476644  720939 cni.go:84] Creating CNI manager for ""
	I0923 13:10:24.476675  720939 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 13:10:24.476686  720939 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 13:10:24.476706  720939 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-816293 NodeName:addons-816293 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuber
netes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 13:10:24.476861  720939 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-816293"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0923 13:10:24.476935  720939 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 13:10:24.486119  720939 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 13:10:24.486204  720939 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0923 13:10:24.495119  720939 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0923 13:10:24.513069  720939 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 13:10:24.530819  720939 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0923 13:10:24.549004  720939 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0923 13:10:24.552427  720939 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 13:10:24.562847  720939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:10:24.647764  720939 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 13:10:24.662329  720939 certs.go:68] Setting up /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293 for IP: 192.168.49.2
	I0923 13:10:24.662362  720939 certs.go:194] generating shared ca certs ...
	I0923 13:10:24.662378  720939 certs.go:226] acquiring lock for ca certs: {Name:mk527b93d9674c57825754d278442fd54dec1acb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:10:24.662589  720939 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19690-714802/.minikube/ca.key
	I0923 13:10:25.187270  720939 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19690-714802/.minikube/ca.crt ...
	I0923 13:10:25.187303  720939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-714802/.minikube/ca.crt: {Name:mk59fe7ff27825d0b3e1b83df770cf8e994653de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:10:25.187576  720939 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19690-714802/.minikube/ca.key ...
	I0923 13:10:25.187594  720939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-714802/.minikube/ca.key: {Name:mk5ab979687a32aa82781efad074c75a1a3ef4ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:10:25.187717  720939 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19690-714802/.minikube/proxy-client-ca.key
	I0923 13:10:25.724045  720939 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19690-714802/.minikube/proxy-client-ca.crt ...
	I0923 13:10:25.724075  720939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-714802/.minikube/proxy-client-ca.crt: {Name:mk61a2cf52e2f511aa7a57cfe7b8f0edcef0198a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:10:25.724272  720939 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19690-714802/.minikube/proxy-client-ca.key ...
	I0923 13:10:25.724287  720939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-714802/.minikube/proxy-client-ca.key: {Name:mk20e6309a58973809fa54cdff3588c828d810ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:10:25.724897  720939 certs.go:256] generating profile certs ...
	I0923 13:10:25.725018  720939 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/client.key
	I0923 13:10:25.725042  720939 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/client.crt with IP's: []
	I0923 13:10:26.553938  720939 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/client.crt ...
	I0923 13:10:26.553979  720939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/client.crt: {Name:mkd3101b334a8f1113e6f94ea9272ae499d0bd02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:10:26.554179  720939 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/client.key ...
	I0923 13:10:26.554192  720939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/client.key: {Name:mk7d8f718c134d5f973f5e94b9ebc740f2282c26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:10:26.554273  720939 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/apiserver.key.422867d5
	I0923 13:10:26.554297  720939 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/apiserver.crt.422867d5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0923 13:10:26.862110  720939 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/apiserver.crt.422867d5 ...
	I0923 13:10:26.862136  720939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/apiserver.crt.422867d5: {Name:mk95eba5a1add8e5a1494e5cfa31b736f2af1bcc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:10:26.862302  720939 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/apiserver.key.422867d5 ...
	I0923 13:10:26.862310  720939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/apiserver.key.422867d5: {Name:mk54f663391840dea7d94377a1d53565a226a2c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:10:26.862381  720939 certs.go:381] copying /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/apiserver.crt.422867d5 -> /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/apiserver.crt
	I0923 13:10:26.862457  720939 certs.go:385] copying /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/apiserver.key.422867d5 -> /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/apiserver.key
	I0923 13:10:26.862504  720939 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/proxy-client.key
	I0923 13:10:26.862518  720939 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/proxy-client.crt with IP's: []
	I0923 13:10:27.145865  720939 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/proxy-client.crt ...
	I0923 13:10:27.145917  720939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/proxy-client.crt: {Name:mkc7dfe1f891e2289ffd7a80fb069c53bd37ea36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:10:27.146111  720939 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/proxy-client.key ...
	I0923 13:10:27.146126  720939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/proxy-client.key: {Name:mkde5093e6a75d02720ab44a10ee057d2ec0c779 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:10:27.146320  720939 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-714802/.minikube/certs/ca-key.pem (1679 bytes)
	I0923 13:10:27.146367  720939 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-714802/.minikube/certs/ca.pem (1078 bytes)
	I0923 13:10:27.146401  720939 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-714802/.minikube/certs/cert.pem (1123 bytes)
	I0923 13:10:27.146426  720939 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-714802/.minikube/certs/key.pem (1675 bytes)
	I0923 13:10:27.147075  720939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-714802/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 13:10:27.173602  720939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-714802/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 13:10:27.199598  720939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-714802/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 13:10:27.224753  720939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-714802/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0923 13:10:27.249401  720939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0923 13:10:27.274205  720939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0923 13:10:27.299033  720939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 13:10:27.323528  720939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0923 13:10:27.347685  720939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-714802/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 13:10:27.372613  720939 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 13:10:27.391037  720939 ssh_runner.go:195] Run: openssl version
	I0923 13:10:27.396578  720939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 13:10:27.406529  720939 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:10:27.409995  720939 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 13:10 /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:10:27.410070  720939 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:10:27.417098  720939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 13:10:27.426571  720939 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 13:10:27.429939  720939 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 13:10:27.430026  720939 kubeadm.go:392] StartCluster: {Name:addons-816293 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-816293 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 13:10:27.430167  720939 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0923 13:10:27.447994  720939 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 13:10:27.457233  720939 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 13:10:27.466532  720939 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0923 13:10:27.466599  720939 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 13:10:27.475913  720939 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 13:10:27.475935  720939 kubeadm.go:157] found existing configuration files:
	
	I0923 13:10:27.476014  720939 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0923 13:10:27.484737  720939 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 13:10:27.484832  720939 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 13:10:27.493780  720939 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0923 13:10:27.503122  720939 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 13:10:27.503193  720939 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 13:10:27.511851  720939 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0923 13:10:27.520888  720939 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 13:10:27.520996  720939 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 13:10:27.529507  720939 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0923 13:10:27.538462  720939 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 13:10:27.538535  720939 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0923 13:10:27.547634  720939 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0923 13:10:27.591608  720939 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0923 13:10:27.591908  720939 kubeadm.go:310] [preflight] Running pre-flight checks
	I0923 13:10:27.623613  720939 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0923 13:10:27.623775  720939 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I0923 13:10:27.623842  720939 kubeadm.go:310] OS: Linux
	I0923 13:10:27.623918  720939 kubeadm.go:310] CGROUPS_CPU: enabled
	I0923 13:10:27.624005  720939 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0923 13:10:27.624087  720939 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0923 13:10:27.624173  720939 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0923 13:10:27.624257  720939 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0923 13:10:27.624341  720939 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0923 13:10:27.624422  720939 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0923 13:10:27.624505  720939 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0923 13:10:27.624587  720939 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0923 13:10:27.702832  720939 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0923 13:10:27.703016  720939 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0923 13:10:27.703151  720939 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0923 13:10:27.715274  720939 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0923 13:10:27.719247  720939 out.go:235]   - Generating certificates and keys ...
	I0923 13:10:27.719484  720939 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0923 13:10:27.719602  720939 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0923 13:10:28.382662  720939 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0923 13:10:28.733735  720939 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0923 13:10:29.634765  720939 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0923 13:10:29.914615  720939 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0923 13:10:30.523225  720939 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0923 13:10:30.523593  720939 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-816293 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0923 13:10:30.701156  720939 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0923 13:10:30.701511  720939 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-816293 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0923 13:10:30.980320  720939 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0923 13:10:31.397219  720939 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0923 13:10:31.692868  720939 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0923 13:10:31.693130  720939 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0923 13:10:32.304993  720939 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0923 13:10:33.410248  720939 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0923 13:10:34.376376  720939 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0923 13:10:34.986171  720939 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0923 13:10:35.429310  720939 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0923 13:10:35.430051  720939 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0923 13:10:35.433006  720939 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0923 13:10:35.435260  720939 out.go:235]   - Booting up control plane ...
	I0923 13:10:35.435373  720939 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0923 13:10:35.435449  720939 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0923 13:10:35.436168  720939 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0923 13:10:35.448832  720939 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0923 13:10:35.455814  720939 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0923 13:10:35.455874  720939 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0923 13:10:35.562845  720939 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0923 13:10:35.562990  720939 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0923 13:10:36.563769  720939 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.000909154s
	I0923 13:10:36.563857  720939 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0923 13:10:42.565732  720939 kubeadm.go:310] [api-check] The API server is healthy after 6.001922516s
	I0923 13:10:42.590107  720939 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0923 13:10:42.608798  720939 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0923 13:10:42.630773  720939 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0923 13:10:42.630978  720939 kubeadm.go:310] [mark-control-plane] Marking the node addons-816293 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0923 13:10:42.643037  720939 kubeadm.go:310] [bootstrap-token] Using token: jue7t8.ifonfcbdzs91nmi7
	I0923 13:10:42.645240  720939 out.go:235]   - Configuring RBAC rules ...
	I0923 13:10:42.645378  720939 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0923 13:10:42.651732  720939 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0923 13:10:42.659630  720939 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0923 13:10:42.663553  720939 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0923 13:10:42.667443  720939 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0923 13:10:42.671027  720939 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0923 13:10:42.976818  720939 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0923 13:10:43.403104  720939 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0923 13:10:43.978120  720939 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0923 13:10:43.979217  720939 kubeadm.go:310] 
	I0923 13:10:43.979307  720939 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0923 13:10:43.979320  720939 kubeadm.go:310] 
	I0923 13:10:43.979396  720939 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0923 13:10:43.979405  720939 kubeadm.go:310] 
	I0923 13:10:43.979430  720939 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0923 13:10:43.979492  720939 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0923 13:10:43.979549  720939 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0923 13:10:43.979566  720939 kubeadm.go:310] 
	I0923 13:10:43.979635  720939 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0923 13:10:43.979647  720939 kubeadm.go:310] 
	I0923 13:10:43.979695  720939 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0923 13:10:43.979702  720939 kubeadm.go:310] 
	I0923 13:10:43.979754  720939 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0923 13:10:43.979835  720939 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0923 13:10:43.979910  720939 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0923 13:10:43.979919  720939 kubeadm.go:310] 
	I0923 13:10:43.980009  720939 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0923 13:10:43.980091  720939 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0923 13:10:43.980099  720939 kubeadm.go:310] 
	I0923 13:10:43.980196  720939 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token jue7t8.ifonfcbdzs91nmi7 \
	I0923 13:10:43.980313  720939 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:55443aee913122bbe6356d8284b0f4f2215d82633d1715094eaa306e6aa2be51 \
	I0923 13:10:43.980343  720939 kubeadm.go:310] 	--control-plane 
	I0923 13:10:43.980350  720939 kubeadm.go:310] 
	I0923 13:10:43.980433  720939 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0923 13:10:43.980443  720939 kubeadm.go:310] 
	I0923 13:10:43.980527  720939 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token jue7t8.ifonfcbdzs91nmi7 \
	I0923 13:10:43.980643  720939 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:55443aee913122bbe6356d8284b0f4f2215d82633d1715094eaa306e6aa2be51 
	I0923 13:10:43.984850  720939 kubeadm.go:310] W0923 13:10:27.587369    1812 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 13:10:43.985167  720939 kubeadm.go:310] W0923 13:10:27.588805    1812 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 13:10:43.985385  720939 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
	I0923 13:10:43.985490  720939 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0923 13:10:43.985509  720939 cni.go:84] Creating CNI manager for ""
	I0923 13:10:43.985524  720939 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 13:10:43.989382  720939 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0923 13:10:43.991510  720939 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0923 13:10:44.000322  720939 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0923 13:10:44.027941  720939 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0923 13:10:44.028087  720939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 13:10:44.028184  720939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-816293 minikube.k8s.io/updated_at=2024_09_23T13_10_44_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1 minikube.k8s.io/name=addons-816293 minikube.k8s.io/primary=true
	I0923 13:10:44.181663  720939 ops.go:34] apiserver oom_adj: -16
	I0923 13:10:44.181818  720939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 13:10:44.682249  720939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 13:10:45.182648  720939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 13:10:45.682511  720939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 13:10:46.182693  720939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 13:10:46.681862  720939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 13:10:47.182820  720939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 13:10:47.682454  720939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 13:10:48.182220  720939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 13:10:48.681899  720939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 13:10:48.812954  720939 kubeadm.go:1113] duration metric: took 4.784905874s to wait for elevateKubeSystemPrivileges
	I0923 13:10:48.812992  720939 kubeadm.go:394] duration metric: took 21.383000135s to StartCluster
	I0923 13:10:48.813009  720939 settings.go:142] acquiring lock: {Name:mke1d97646bb6c4928996b4a93e7bcff38158bd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:10:48.813113  720939 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19690-714802/kubeconfig
	I0923 13:10:48.813480  720939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-714802/kubeconfig: {Name:mk0b3bc0004539087df2d1e8d84176d4090fd8e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:10:48.813677  720939 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 13:10:48.813781  720939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0923 13:10:48.814021  720939 config.go:182] Loaded profile config "addons-816293": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 13:10:48.814056  720939 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0923 13:10:48.814131  720939 addons.go:69] Setting yakd=true in profile "addons-816293"
	I0923 13:10:48.814149  720939 addons.go:234] Setting addon yakd=true in "addons-816293"
	I0923 13:10:48.814171  720939 host.go:66] Checking if "addons-816293" exists ...
	I0923 13:10:48.814664  720939 cli_runner.go:164] Run: docker container inspect addons-816293 --format={{.State.Status}}
	I0923 13:10:48.815129  720939 addons.go:69] Setting cloud-spanner=true in profile "addons-816293"
	I0923 13:10:48.815136  720939 addons.go:69] Setting metrics-server=true in profile "addons-816293"
	I0923 13:10:48.815149  720939 addons.go:234] Setting addon cloud-spanner=true in "addons-816293"
	I0923 13:10:48.815158  720939 addons.go:234] Setting addon metrics-server=true in "addons-816293"
	I0923 13:10:48.815173  720939 host.go:66] Checking if "addons-816293" exists ...
	I0923 13:10:48.815183  720939 host.go:66] Checking if "addons-816293" exists ...
	I0923 13:10:48.815588  720939 cli_runner.go:164] Run: docker container inspect addons-816293 --format={{.State.Status}}
	I0923 13:10:48.815611  720939 cli_runner.go:164] Run: docker container inspect addons-816293 --format={{.State.Status}}
	I0923 13:10:48.816044  720939 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-816293"
	I0923 13:10:48.816069  720939 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-816293"
	I0923 13:10:48.816096  720939 host.go:66] Checking if "addons-816293" exists ...
	I0923 13:10:48.816520  720939 cli_runner.go:164] Run: docker container inspect addons-816293 --format={{.State.Status}}
	I0923 13:10:48.819646  720939 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-816293"
	I0923 13:10:48.819725  720939 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-816293"
	I0923 13:10:48.819756  720939 host.go:66] Checking if "addons-816293" exists ...
	I0923 13:10:48.820209  720939 cli_runner.go:164] Run: docker container inspect addons-816293 --format={{.State.Status}}
	I0923 13:10:48.829068  720939 addons.go:69] Setting registry=true in profile "addons-816293"
	I0923 13:10:48.829109  720939 addons.go:234] Setting addon registry=true in "addons-816293"
	I0923 13:10:48.829151  720939 host.go:66] Checking if "addons-816293" exists ...
	I0923 13:10:48.829649  720939 cli_runner.go:164] Run: docker container inspect addons-816293 --format={{.State.Status}}
	I0923 13:10:48.831694  720939 addons.go:69] Setting default-storageclass=true in profile "addons-816293"
	I0923 13:10:48.831731  720939 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-816293"
	I0923 13:10:48.832062  720939 cli_runner.go:164] Run: docker container inspect addons-816293 --format={{.State.Status}}
	I0923 13:10:48.832216  720939 addons.go:69] Setting storage-provisioner=true in profile "addons-816293"
	I0923 13:10:48.832231  720939 addons.go:234] Setting addon storage-provisioner=true in "addons-816293"
	I0923 13:10:48.832258  720939 host.go:66] Checking if "addons-816293" exists ...
	I0923 13:10:48.832731  720939 cli_runner.go:164] Run: docker container inspect addons-816293 --format={{.State.Status}}
	I0923 13:10:48.849046  720939 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-816293"
	I0923 13:10:48.849086  720939 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-816293"
	I0923 13:10:48.849431  720939 cli_runner.go:164] Run: docker container inspect addons-816293 --format={{.State.Status}}
	I0923 13:10:48.855572  720939 addons.go:69] Setting gcp-auth=true in profile "addons-816293"
	I0923 13:10:48.855609  720939 mustload.go:65] Loading cluster: addons-816293
	I0923 13:10:48.855802  720939 config.go:182] Loaded profile config "addons-816293": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 13:10:48.856054  720939 cli_runner.go:164] Run: docker container inspect addons-816293 --format={{.State.Status}}
	I0923 13:10:48.870510  720939 addons.go:69] Setting volcano=true in profile "addons-816293"
	I0923 13:10:48.870553  720939 addons.go:234] Setting addon volcano=true in "addons-816293"
	I0923 13:10:48.870594  720939 host.go:66] Checking if "addons-816293" exists ...
	I0923 13:10:48.871070  720939 cli_runner.go:164] Run: docker container inspect addons-816293 --format={{.State.Status}}
	I0923 13:10:48.871435  720939 addons.go:69] Setting ingress=true in profile "addons-816293"
	I0923 13:10:48.871456  720939 addons.go:234] Setting addon ingress=true in "addons-816293"
	I0923 13:10:48.871492  720939 host.go:66] Checking if "addons-816293" exists ...
	I0923 13:10:48.871909  720939 cli_runner.go:164] Run: docker container inspect addons-816293 --format={{.State.Status}}
	I0923 13:10:48.892853  720939 addons.go:69] Setting volumesnapshots=true in profile "addons-816293"
	I0923 13:10:48.892899  720939 addons.go:234] Setting addon volumesnapshots=true in "addons-816293"
	I0923 13:10:48.893094  720939 host.go:66] Checking if "addons-816293" exists ...
	I0923 13:10:48.893612  720939 cli_runner.go:164] Run: docker container inspect addons-816293 --format={{.State.Status}}
	I0923 13:10:48.896683  720939 addons.go:69] Setting ingress-dns=true in profile "addons-816293"
	I0923 13:10:48.896715  720939 addons.go:234] Setting addon ingress-dns=true in "addons-816293"
	I0923 13:10:48.896758  720939 host.go:66] Checking if "addons-816293" exists ...
	I0923 13:10:48.898795  720939 cli_runner.go:164] Run: docker container inspect addons-816293 --format={{.State.Status}}
	I0923 13:10:48.914428  720939 out.go:177] * Verifying Kubernetes components...
	I0923 13:10:48.917327  720939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:10:48.918529  720939 addons.go:69] Setting inspektor-gadget=true in profile "addons-816293"
	I0923 13:10:48.918566  720939 addons.go:234] Setting addon inspektor-gadget=true in "addons-816293"
	I0923 13:10:48.918604  720939 host.go:66] Checking if "addons-816293" exists ...
	I0923 13:10:48.919100  720939 cli_runner.go:164] Run: docker container inspect addons-816293 --format={{.State.Status}}
	I0923 13:10:48.951550  720939 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0923 13:10:48.966366  720939 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0923 13:10:48.966391  720939 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0923 13:10:48.966458  720939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-816293
	I0923 13:10:48.981708  720939 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0923 13:10:48.983727  720939 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0923 13:10:48.983754  720939 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0923 13:10:48.983841  720939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-816293
	I0923 13:10:49.004509  720939 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0923 13:10:49.005557  720939 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0923 13:10:49.031045  720939 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0923 13:10:49.031078  720939 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0923 13:10:49.031166  720939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-816293
	I0923 13:10:49.067402  720939 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0923 13:10:49.069178  720939 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0923 13:10:49.071383  720939 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0923 13:10:49.073399  720939 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0923 13:10:49.077055  720939 out.go:177]   - Using image docker.io/registry:2.8.3
	I0923 13:10:49.029103  720939 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 13:10:49.079685  720939 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0923 13:10:49.079772  720939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-816293
	I0923 13:10:49.099134  720939 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0923 13:10:49.101341  720939 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0923 13:10:49.101410  720939 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0923 13:10:49.101515  720939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-816293
	I0923 13:10:49.117649  720939 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0923 13:10:49.122069  720939 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0923 13:10:49.126955  720939 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0923 13:10:49.132915  720939 addons.go:234] Setting addon default-storageclass=true in "addons-816293"
	I0923 13:10:49.137304  720939 host.go:66] Checking if "addons-816293" exists ...
	I0923 13:10:49.137800  720939 cli_runner.go:164] Run: docker container inspect addons-816293 --format={{.State.Status}}
	I0923 13:10:49.137975  720939 host.go:66] Checking if "addons-816293" exists ...
	I0923 13:10:49.143680  720939 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-816293"
	I0923 13:10:49.143725  720939 host.go:66] Checking if "addons-816293" exists ...
	I0923 13:10:49.144158  720939 cli_runner.go:164] Run: docker container inspect addons-816293 --format={{.State.Status}}
	I0923 13:10:49.163107  720939 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0923 13:10:49.164782  720939 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0923 13:10:49.173087  720939 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0923 13:10:49.175363  720939 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0923 13:10:49.183029  720939 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0923 13:10:49.183067  720939 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0923 13:10:49.183138  720939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-816293
	I0923 13:10:49.183561  720939 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0923 13:10:49.183575  720939 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0923 13:10:49.183626  720939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-816293
	I0923 13:10:49.209504  720939 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 13:10:49.209828  720939 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0923 13:10:49.213933  720939 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 13:10:49.214160  720939 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0923 13:10:49.214176  720939 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0923 13:10:49.214241  720939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-816293
	I0923 13:10:49.223341  720939 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0923 13:10:49.230023  720939 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0923 13:10:49.230250  720939 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0923 13:10:49.230487  720939 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0923 13:10:49.230517  720939 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0923 13:10:49.230607  720939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-816293
	I0923 13:10:49.234828  720939 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0923 13:10:49.234854  720939 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0923 13:10:49.235567  720939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-816293
	I0923 13:10:49.237868  720939 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0923 13:10:49.237936  720939 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0923 13:10:49.238030  720939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-816293
	I0923 13:10:49.272389  720939 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 13:10:49.274551  720939 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 13:10:49.274573  720939 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0923 13:10:49.274633  720939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-816293
	I0923 13:10:49.294191  720939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19690-714802/.minikube/machines/addons-816293/id_rsa Username:docker}
	I0923 13:10:49.296688  720939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19690-714802/.minikube/machines/addons-816293/id_rsa Username:docker}
	I0923 13:10:49.310657  720939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19690-714802/.minikube/machines/addons-816293/id_rsa Username:docker}
	I0923 13:10:49.337707  720939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19690-714802/.minikube/machines/addons-816293/id_rsa Username:docker}
	I0923 13:10:49.338331  720939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19690-714802/.minikube/machines/addons-816293/id_rsa Username:docker}
	I0923 13:10:49.423616  720939 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0923 13:10:49.423637  720939 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0923 13:10:49.423710  720939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-816293
	I0923 13:10:49.425436  720939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19690-714802/.minikube/machines/addons-816293/id_rsa Username:docker}
	I0923 13:10:49.427483  720939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19690-714802/.minikube/machines/addons-816293/id_rsa Username:docker}
	I0923 13:10:49.428691  720939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19690-714802/.minikube/machines/addons-816293/id_rsa Username:docker}
	I0923 13:10:49.447388  720939 out.go:177]   - Using image docker.io/busybox:stable
	I0923 13:10:49.449688  720939 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0923 13:10:49.452863  720939 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 13:10:49.452887  720939 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0923 13:10:49.452971  720939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-816293
	I0923 13:10:49.457325  720939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19690-714802/.minikube/machines/addons-816293/id_rsa Username:docker}
	I0923 13:10:49.462385  720939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19690-714802/.minikube/machines/addons-816293/id_rsa Username:docker}
	I0923 13:10:49.469218  720939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19690-714802/.minikube/machines/addons-816293/id_rsa Username:docker}
	I0923 13:10:49.469867  720939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19690-714802/.minikube/machines/addons-816293/id_rsa Username:docker}
	I0923 13:10:49.505080  720939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19690-714802/.minikube/machines/addons-816293/id_rsa Username:docker}
	I0923 13:10:49.508343  720939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19690-714802/.minikube/machines/addons-816293/id_rsa Username:docker}
	W0923 13:10:49.511403  720939 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0923 13:10:49.511432  720939 retry.go:31] will retry after 297.619035ms: ssh: handshake failed: EOF
	I0923 13:10:50.051540  720939 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.134169766s)
	I0923 13:10:50.051624  720939 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 13:10:50.051686  720939 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.237885277s)
	I0923 13:10:50.051826  720939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0923 13:10:50.055739  720939 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 13:10:50.102856  720939 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0923 13:10:50.102888  720939 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0923 13:10:50.114224  720939 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0923 13:10:50.114251  720939 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0923 13:10:50.265396  720939 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0923 13:10:50.265419  720939 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0923 13:10:50.280469  720939 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0923 13:10:50.280537  720939 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0923 13:10:50.286519  720939 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0923 13:10:50.286585  720939 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0923 13:10:50.308322  720939 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0923 13:10:50.329281  720939 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0923 13:10:50.392095  720939 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0923 13:10:50.392163  720939 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0923 13:10:50.438032  720939 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0923 13:10:50.458750  720939 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 13:10:50.495917  720939 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0923 13:10:50.495997  720939 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0923 13:10:50.500665  720939 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 13:10:50.500743  720939 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0923 13:10:50.504668  720939 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 13:10:50.532388  720939 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0923 13:10:50.532461  720939 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0923 13:10:50.562236  720939 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0923 13:10:50.562309  720939 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0923 13:10:50.573925  720939 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0923 13:10:50.573999  720939 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0923 13:10:50.575381  720939 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0923 13:10:50.603740  720939 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0923 13:10:50.603768  720939 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0923 13:10:50.622327  720939 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0923 13:10:50.631113  720939 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 13:10:50.713379  720939 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0923 13:10:50.768784  720939 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0923 13:10:50.768864  720939 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0923 13:10:50.773988  720939 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0923 13:10:50.774069  720939 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0923 13:10:50.775273  720939 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0923 13:10:50.775375  720939 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0923 13:10:50.815549  720939 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0923 13:10:50.815641  720939 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0923 13:10:51.120037  720939 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0923 13:10:51.120120  720939 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0923 13:10:51.124027  720939 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 13:10:51.124055  720939 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0923 13:10:51.128930  720939 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0923 13:10:51.128972  720939 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0923 13:10:51.249958  720939 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0923 13:10:51.249984  720939 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0923 13:10:51.417480  720939 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 13:10:51.448225  720939 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0923 13:10:51.448255  720939 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0923 13:10:51.500479  720939 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0923 13:10:51.500506  720939 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0923 13:10:51.573052  720939 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0923 13:10:51.762571  720939 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0923 13:10:51.762605  720939 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0923 13:10:51.783820  720939 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0923 13:10:51.783845  720939 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0923 13:10:51.803694  720939 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0923 13:10:51.803723  720939 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0923 13:10:51.842288  720939 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0923 13:10:51.842318  720939 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0923 13:10:51.988668  720939 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 13:10:51.988692  720939 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0923 13:10:52.087480  720939 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0923 13:10:52.087507  720939 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0923 13:10:52.213277  720939 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 13:10:52.444987  720939 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.39333283s)
	I0923 13:10:52.445871  720939 node_ready.go:35] waiting up to 6m0s for node "addons-816293" to be "Ready" ...
	I0923 13:10:52.447067  720939 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.395192038s)
	I0923 13:10:52.447094  720939 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0923 13:10:52.455977  720939 node_ready.go:49] node "addons-816293" has status "Ready":"True"
	I0923 13:10:52.456004  720939 node_ready.go:38] duration metric: took 10.090688ms for node "addons-816293" to be "Ready" ...
	I0923 13:10:52.456014  720939 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 13:10:52.471972  720939 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-4lhtz" in "kube-system" namespace to be "Ready" ...
	I0923 13:10:52.498869  720939 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0923 13:10:52.498895  720939 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0923 13:10:52.950428  720939 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-816293" context rescaled to 1 replicas
	I0923 13:10:52.988785  720939 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0923 13:10:52.988808  720939 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0923 13:10:53.788792  720939 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 13:10:53.788819  720939 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0923 13:10:54.206674  720939 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 13:10:54.479692  720939 pod_ready.go:103] pod "coredns-7c65d6cfc9-4lhtz" in "kube-system" namespace has status "Ready":"False"
	I0923 13:10:55.548647  720939 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.492862021s)
	I0923 13:10:56.150328  720939 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0923 13:10:56.150416  720939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-816293
	I0923 13:10:56.183676  720939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19690-714802/.minikube/machines/addons-816293/id_rsa Username:docker}
	I0923 13:10:56.483692  720939 pod_ready.go:103] pod "coredns-7c65d6cfc9-4lhtz" in "kube-system" namespace has status "Ready":"False"
	I0923 13:10:57.014232  720939 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0923 13:10:57.147544  720939 addons.go:234] Setting addon gcp-auth=true in "addons-816293"
	I0923 13:10:57.147654  720939 host.go:66] Checking if "addons-816293" exists ...
	I0923 13:10:57.148221  720939 cli_runner.go:164] Run: docker container inspect addons-816293 --format={{.State.Status}}
	I0923 13:10:57.175729  720939 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0923 13:10:57.175784  720939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-816293
	I0923 13:10:57.212830  720939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19690-714802/.minikube/machines/addons-816293/id_rsa Username:docker}
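	The `docker container inspect -f` calls above use a Go template to extract the host port Docker mapped to the container's SSH port (22/tcp), which sshutil then dials (port 33528 here). As an illustration only (not minikube's code), the same template expression can be evaluated against a minimal stand-in for the inspect JSON; the `container` map below is a hypothetical mock modeling just the fields the template touches:

	```go
	package main

	import (
		"bytes"
		"fmt"
		"text/template"
	)

	func main() {
		// Hypothetical stand-in for `docker container inspect` output;
		// only the fields the format string reads are modeled.
		container := map[string]any{
			"NetworkSettings": map[string]any{
				"Ports": map[string]any{
					"22/tcp": []map[string]string{{"HostPort": "33528"}},
				},
			},
		}
		// The same template expression passed via -f in the log above
		// (shown here without the surrounding single quotes).
		tmpl := template.Must(template.New("port").Parse(
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
		var out bytes.Buffer
		if err := tmpl.Execute(&out, container); err != nil {
			panic(err)
		}
		fmt.Println(out.String())
	}
	```

	The nested `index` calls first look up the "22/tcp" key in the port map, then take the first binding in the slice, before reading its `HostPort` field.
	
	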
	I0923 13:10:58.985326  720939 pod_ready.go:98] pod "coredns-7c65d6cfc9-4lhtz" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 13:10:58 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 13:10:48 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 13:10:48 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 13:10:48 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 13:10:48 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.2 HostIPs:[{IP:192.168.49.2}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-23 13:10:48 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-23 13:10:50 +0000 UTC,FinishedAt:2024-09-23 13:10:58 +0000 UTC,ContainerID:docker://ad381dc8d22af6fc27463f25c24b43f8b7a59ec491265de548895cafcceca467,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://ad381dc8d22af6fc27463f25c24b43f8b7a59ec491265de548895cafcceca467 Started:0x4001c95610 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0x4001ccc170} {Name:kube-api-access-b4dm4 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0x4001ccc180}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0923 13:10:58.985542  720939 pod_ready.go:82] duration metric: took 6.513524778s for pod "coredns-7c65d6cfc9-4lhtz" in "kube-system" namespace to be "Ready" ...
	E0923 13:10:58.985574  720939 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-4lhtz" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 13:10:58 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 13:10:48 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 13:10:48 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 13:10:48 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 13:10:48 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.2 HostIPs:[{IP:192.168.49.2}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-23 13:10:48 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-23 13:10:50 +0000 UTC,FinishedAt:2024-09-23 13:10:58 +0000 UTC,ContainerID:docker://ad381dc8d22af6fc27463f25c24b43f8b7a59ec491265de548895cafcceca467,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://ad381dc8d22af6fc27463f25c24b43f8b7a59ec491265de548895cafcceca467 Started:0x4001c95610 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0x4001ccc170} {Name:kube-api-access-b4dm4 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0x4001ccc180}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0923 13:10:58.985629  720939 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-rwnh8" in "kube-system" namespace to be "Ready" ...
	I0923 13:10:59.018588  720939 pod_ready.go:93] pod "coredns-7c65d6cfc9-rwnh8" in "kube-system" namespace has status "Ready":"True"
	I0923 13:10:59.018616  720939 pod_ready.go:82] duration metric: took 32.962102ms for pod "coredns-7c65d6cfc9-rwnh8" in "kube-system" namespace to be "Ready" ...
	I0923 13:10:59.018628  720939 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-816293" in "kube-system" namespace to be "Ready" ...
	I0923 13:10:59.081556  720939 pod_ready.go:93] pod "etcd-addons-816293" in "kube-system" namespace has status "Ready":"True"
	I0923 13:10:59.081631  720939 pod_ready.go:82] duration metric: took 62.993213ms for pod "etcd-addons-816293" in "kube-system" namespace to be "Ready" ...
	I0923 13:10:59.081660  720939 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-816293" in "kube-system" namespace to be "Ready" ...
	I0923 13:10:59.110977  720939 pod_ready.go:93] pod "kube-apiserver-addons-816293" in "kube-system" namespace has status "Ready":"True"
	I0923 13:10:59.111041  720939 pod_ready.go:82] duration metric: took 29.3602ms for pod "kube-apiserver-addons-816293" in "kube-system" namespace to be "Ready" ...
	I0923 13:10:59.111075  720939 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-816293" in "kube-system" namespace to be "Ready" ...
	I0923 13:10:59.136659  720939 pod_ready.go:93] pod "kube-controller-manager-addons-816293" in "kube-system" namespace has status "Ready":"True"
	I0923 13:10:59.136682  720939 pod_ready.go:82] duration metric: took 25.586055ms for pod "kube-controller-manager-addons-816293" in "kube-system" namespace to be "Ready" ...
	I0923 13:10:59.136694  720939 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gwjn5" in "kube-system" namespace to be "Ready" ...
	I0923 13:10:59.384165  720939 pod_ready.go:93] pod "kube-proxy-gwjn5" in "kube-system" namespace has status "Ready":"True"
	I0923 13:10:59.384236  720939 pod_ready.go:82] duration metric: took 247.533229ms for pod "kube-proxy-gwjn5" in "kube-system" namespace to be "Ready" ...
	I0923 13:10:59.384262  720939 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-816293" in "kube-system" namespace to be "Ready" ...
	I0923 13:10:59.856482  720939 pod_ready.go:93] pod "kube-scheduler-addons-816293" in "kube-system" namespace has status "Ready":"True"
	I0923 13:10:59.856555  720939 pod_ready.go:82] duration metric: took 472.270779ms for pod "kube-scheduler-addons-816293" in "kube-system" namespace to be "Ready" ...
	I0923 13:10:59.856590  720939 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-95vmg" in "kube-system" namespace to be "Ready" ...
	I0923 13:11:01.290819  720939 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (10.982459357s)
	I0923 13:11:01.290820  720939 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (10.961506877s)
	I0923 13:11:01.291014  720939 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (10.852901161s)
	I0923 13:11:01.291550  720939 addons.go:475] Verifying addon ingress=true in "addons-816293"
	I0923 13:11:01.291071  720939 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (10.832295444s)
	I0923 13:11:01.291127  720939 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.786363306s)
	I0923 13:11:01.291165  720939 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (10.715725863s)
	I0923 13:11:01.291184  720939 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.668780146s)
	I0923 13:11:01.291255  720939 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.660067972s)
	I0923 13:11:01.292061  720939 addons.go:475] Verifying addon metrics-server=true in "addons-816293"
	I0923 13:11:01.291278  720939 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (10.577824294s)
	I0923 13:11:01.292097  720939 addons.go:475] Verifying addon registry=true in "addons-816293"
	I0923 13:11:01.291352  720939 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (9.873843059s)
	W0923 13:11:01.292356  720939 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0923 13:11:01.292382  720939 retry.go:31] will retry after 249.364787ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
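	The failure above is a known ordering race: the VolumeSnapshotClass custom resource is applied in the same `kubectl apply` batch as the CRD that defines it, and the API server has not yet established the new REST mapping, hence "no matches for kind ... ensure CRDs are installed first". The log shows the apply being retried after a short delay. A minimal sketch of that retry-until-success pattern (`retryUntil` and `apply` are hypothetical names, not minikube's retry.go API):

	```go
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// retryUntil re-runs fn with a fixed delay until it succeeds or
	// attempts are exhausted, returning the attempt count and last error.
	func retryUntil(attempts int, delay time.Duration, fn func() error) (int, error) {
		var err error
		for i := 1; i <= attempts; i++ {
			if err = fn(); err == nil {
				return i, nil
			}
			time.Sleep(delay)
		}
		return attempts, err
	}

	func main() {
		// Simulate kubectl apply racing CRD establishment: the CR's
		// REST mapping is missing on the first two attempts, then appears.
		calls := 0
		apply := func() error {
			calls++
			if calls < 3 {
				return errors.New(`no matches for kind "VolumeSnapshotClass"`)
			}
			return nil
		}
		n, err := retryUntil(5, 10*time.Millisecond, apply)
		fmt.Println(n, err)
	}
	```

	In the real log the retry succeeds on a later attempt once the CRDs report an Established condition, which is why the second apply (with `--force`) completes a couple of seconds later.
	
	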
	I0923 13:11:01.291381  720939 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (9.718303252s)
	I0923 13:11:01.291433  720939 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (9.078128989s)
	I0923 13:11:01.295167  720939 out.go:177] * Verifying ingress addon...
	I0923 13:11:01.298271  720939 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-816293 service yakd-dashboard -n yakd-dashboard
	
	I0923 13:11:01.298289  720939 out.go:177] * Verifying registry addon...
	I0923 13:11:01.301154  720939 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0923 13:11:01.303021  720939 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0923 13:11:01.337349  720939 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0923 13:11:01.337428  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:11:01.337784  720939 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0923 13:11:01.337808  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:01.542142  720939 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 13:11:01.810938  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:01.811656  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:11:01.876598  720939 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-95vmg" in "kube-system" namespace has status "Ready":"False"
	I0923 13:11:02.159697  720939 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.952920258s)
	I0923 13:11:02.159733  720939 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-816293"
	I0923 13:11:02.159930  720939 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (4.984180043s)
	I0923 13:11:02.163398  720939 out.go:177] * Verifying csi-hostpath-driver addon...
	I0923 13:11:02.163467  720939 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 13:11:02.166369  720939 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0923 13:11:02.168687  720939 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0923 13:11:02.170720  720939 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0923 13:11:02.170749  720939 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0923 13:11:02.173775  720939 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0923 13:11:02.173801  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:02.300020  720939 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0923 13:11:02.300099  720939 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0923 13:11:02.307811  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:02.308388  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:11:02.381401  720939 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 13:11:02.381477  720939 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0923 13:11:02.465484  720939 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 13:11:02.672220  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:02.805745  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:02.807065  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:11:03.177797  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:03.309091  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:11:03.310536  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:03.692589  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:03.808103  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:11:03.811266  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:03.923334  720939 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.381139499s)
	I0923 13:11:03.995930  720939 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.530340717s)
	I0923 13:11:03.999377  720939 addons.go:475] Verifying addon gcp-auth=true in "addons-816293"
	I0923 13:11:04.002755  720939 out.go:177] * Verifying gcp-auth addon...
	I0923 13:11:04.006688  720939 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0923 13:11:04.013438  720939 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0923 13:11:04.171421  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:04.307626  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:11:04.308160  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:04.362776  720939 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-95vmg" in "kube-system" namespace has status "Ready":"False"
	I0923 13:11:04.671127  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:04.805602  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:04.807429  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:11:05.172539  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:05.306452  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:05.308077  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:11:05.672265  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:05.806564  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:11:05.807918  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:06.186368  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:06.310200  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:11:06.311979  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:06.363260  720939 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-95vmg" in "kube-system" namespace has status "Ready":"False"
	I0923 13:11:06.672458  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:06.805405  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:06.807063  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:11:07.171342  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:07.309069  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:11:07.309403  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:07.672182  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:07.805894  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:07.807656  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:11:07.863759  720939 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-95vmg" in "kube-system" namespace has status "Ready":"True"
	I0923 13:11:07.863844  720939 pod_ready.go:82] duration metric: took 8.007219627s for pod "nvidia-device-plugin-daemonset-95vmg" in "kube-system" namespace to be "Ready" ...
	I0923 13:11:07.863859  720939 pod_ready.go:39] duration metric: took 15.407830427s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 13:11:07.863894  720939 api_server.go:52] waiting for apiserver process to appear ...
	I0923 13:11:07.863971  720939 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 13:11:07.889569  720939 api_server.go:72] duration metric: took 19.075852655s to wait for apiserver process to appear ...
	I0923 13:11:07.889593  720939 api_server.go:88] waiting for apiserver healthz status ...
	I0923 13:11:07.889617  720939 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0923 13:11:07.897686  720939 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0923 13:11:07.898780  720939 api_server.go:141] control plane version: v1.31.1
	I0923 13:11:07.898810  720939 api_server.go:131] duration metric: took 9.209459ms to wait for apiserver health ...
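	After the kube-apiserver process appears, the log shows a second gate: polling `https://192.168.49.2:8443/healthz` until it returns 200 with body "ok". A sketch of that poll loop under stated assumptions (`waitHealthz` is a hypothetical helper, and an `httptest` server stands in for the real apiserver):

	```go
	package main

	import (
		"fmt"
		"net/http"
		"net/http/httptest"
		"time"
	)

	// waitHealthz polls a /healthz endpoint until it returns HTTP 200
	// or the timeout elapses, similar in spirit to the wait in the log.
	func waitHealthz(url string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := http.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(100 * time.Millisecond)
		}
		return fmt.Errorf("healthz did not return 200 within %s", timeout)
	}

	func main() {
		// Stand-in apiserver that answers /healthz with 200 "ok".
		srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			w.Write([]byte("ok"))
		}))
		defer srv.Close()
		if err := waitHealthz(srv.URL+"/healthz", 2*time.Second); err != nil {
			panic(err)
		}
		fmt.Println("healthz ok")
	}
	```

	Only once healthz passes does the test harness move on to listing kube-system pods and checking the default service account, as the subsequent lines show.
	
	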
	I0923 13:11:07.898819  720939 system_pods.go:43] waiting for kube-system pods to appear ...
	I0923 13:11:07.908448  720939 system_pods.go:59] 17 kube-system pods found
	I0923 13:11:07.908492  720939 system_pods.go:61] "coredns-7c65d6cfc9-rwnh8" [3d69dc29-1c82-4b3a-9971-f16148da1c94] Running
	I0923 13:11:07.908502  720939 system_pods.go:61] "csi-hostpath-attacher-0" [fbc9849b-13fd-4116-93fd-e8f8dae194a1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0923 13:11:07.908518  720939 system_pods.go:61] "csi-hostpath-resizer-0" [cfc38a7b-5b9f-4e7e-af30-e8877917f7e4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0923 13:11:07.908526  720939 system_pods.go:61] "csi-hostpathplugin-c4lh2" [e0fb341e-c2bd-4695-ac48-a02a506144a8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0923 13:11:07.908531  720939 system_pods.go:61] "etcd-addons-816293" [68360a2d-d1c5-43a9-aa74-f5a1933de24f] Running
	I0923 13:11:07.908535  720939 system_pods.go:61] "kube-apiserver-addons-816293" [ec94d9f9-0507-45ca-8e6e-f79a3fc7bec7] Running
	I0923 13:11:07.908539  720939 system_pods.go:61] "kube-controller-manager-addons-816293" [3ea0e19d-de51-48d0-bfa9-ea6e088fe2e9] Running
	I0923 13:11:07.908546  720939 system_pods.go:61] "kube-ingress-dns-minikube" [2efd2e4d-1a47-467e-ad0e-457bae12ae22] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0923 13:11:07.908556  720939 system_pods.go:61] "kube-proxy-gwjn5" [0af796ff-1040-456d-97b6-df619abe545e] Running
	I0923 13:11:07.908565  720939 system_pods.go:61] "kube-scheduler-addons-816293" [f43016b1-c5cf-4c34-9a8e-21107d5ef1d7] Running
	I0923 13:11:07.908576  720939 system_pods.go:61] "metrics-server-84c5f94fbc-v6k5c" [47fddf7e-71ac-4304-b3a5-52200b9e861f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0923 13:11:07.908581  720939 system_pods.go:61] "nvidia-device-plugin-daemonset-95vmg" [0441bbd4-ba18-4999-88db-f008dcc67689] Running
	I0923 13:11:07.908591  720939 system_pods.go:61] "registry-66c9cd494c-tgghm" [ec93b34f-db00-4bde-8ed0-46a67564f5cc] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0923 13:11:07.908597  720939 system_pods.go:61] "registry-proxy-tf8z6" [7b435d50-4b55-4c70-b6d9-b0e1fd522370] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0923 13:11:07.908605  720939 system_pods.go:61] "snapshot-controller-56fcc65765-k6tz8" [ade0691a-a8fa-467c-be76-bea4c2d80355] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 13:11:07.908612  720939 system_pods.go:61] "snapshot-controller-56fcc65765-w468l" [e9a88580-956b-467b-9bc2-88466f70ce93] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 13:11:07.908623  720939 system_pods.go:61] "storage-provisioner" [047ae27f-d615-4981-915a-b081568bfd65] Running
	I0923 13:11:07.908631  720939 system_pods.go:74] duration metric: took 9.805532ms to wait for pod list to return data ...
	I0923 13:11:07.908644  720939 default_sa.go:34] waiting for default service account to be created ...
	I0923 13:11:07.911839  720939 default_sa.go:45] found service account: "default"
	I0923 13:11:07.911867  720939 default_sa.go:55] duration metric: took 3.216283ms for default service account to be created ...
	I0923 13:11:07.911879  720939 system_pods.go:116] waiting for k8s-apps to be running ...
	I0923 13:11:07.921800  720939 system_pods.go:86] 17 kube-system pods found
	I0923 13:11:07.921882  720939 system_pods.go:89] "coredns-7c65d6cfc9-rwnh8" [3d69dc29-1c82-4b3a-9971-f16148da1c94] Running
	I0923 13:11:07.921907  720939 system_pods.go:89] "csi-hostpath-attacher-0" [fbc9849b-13fd-4116-93fd-e8f8dae194a1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0923 13:11:07.921932  720939 system_pods.go:89] "csi-hostpath-resizer-0" [cfc38a7b-5b9f-4e7e-af30-e8877917f7e4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0923 13:11:07.921966  720939 system_pods.go:89] "csi-hostpathplugin-c4lh2" [e0fb341e-c2bd-4695-ac48-a02a506144a8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0923 13:11:07.921992  720939 system_pods.go:89] "etcd-addons-816293" [68360a2d-d1c5-43a9-aa74-f5a1933de24f] Running
	I0923 13:11:07.922013  720939 system_pods.go:89] "kube-apiserver-addons-816293" [ec94d9f9-0507-45ca-8e6e-f79a3fc7bec7] Running
	I0923 13:11:07.922034  720939 system_pods.go:89] "kube-controller-manager-addons-816293" [3ea0e19d-de51-48d0-bfa9-ea6e088fe2e9] Running
	I0923 13:11:07.922070  720939 system_pods.go:89] "kube-ingress-dns-minikube" [2efd2e4d-1a47-467e-ad0e-457bae12ae22] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0923 13:11:07.922094  720939 system_pods.go:89] "kube-proxy-gwjn5" [0af796ff-1040-456d-97b6-df619abe545e] Running
	I0923 13:11:07.922114  720939 system_pods.go:89] "kube-scheduler-addons-816293" [f43016b1-c5cf-4c34-9a8e-21107d5ef1d7] Running
	I0923 13:11:07.922140  720939 system_pods.go:89] "metrics-server-84c5f94fbc-v6k5c" [47fddf7e-71ac-4304-b3a5-52200b9e861f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0923 13:11:07.922170  720939 system_pods.go:89] "nvidia-device-plugin-daemonset-95vmg" [0441bbd4-ba18-4999-88db-f008dcc67689] Running
	I0923 13:11:07.922199  720939 system_pods.go:89] "registry-66c9cd494c-tgghm" [ec93b34f-db00-4bde-8ed0-46a67564f5cc] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0923 13:11:07.922220  720939 system_pods.go:89] "registry-proxy-tf8z6" [7b435d50-4b55-4c70-b6d9-b0e1fd522370] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0923 13:11:07.922242  720939 system_pods.go:89] "snapshot-controller-56fcc65765-k6tz8" [ade0691a-a8fa-467c-be76-bea4c2d80355] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 13:11:07.922277  720939 system_pods.go:89] "snapshot-controller-56fcc65765-w468l" [e9a88580-956b-467b-9bc2-88466f70ce93] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 13:11:07.922301  720939 system_pods.go:89] "storage-provisioner" [047ae27f-d615-4981-915a-b081568bfd65] Running
	I0923 13:11:07.922324  720939 system_pods.go:126] duration metric: took 10.432315ms to wait for k8s-apps to be running ...
	I0923 13:11:07.922345  720939 system_svc.go:44] waiting for kubelet service to be running ....
	I0923 13:11:07.922439  720939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 13:11:07.936836  720939 system_svc.go:56] duration metric: took 14.481805ms WaitForService to wait for kubelet
	I0923 13:11:07.936863  720939 kubeadm.go:582] duration metric: took 19.123153651s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 13:11:07.936883  720939 node_conditions.go:102] verifying NodePressure condition ...
	I0923 13:11:07.940643  720939 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0923 13:11:07.940674  720939 node_conditions.go:123] node cpu capacity is 2
	I0923 13:11:07.940686  720939 node_conditions.go:105] duration metric: took 3.797939ms to run NodePressure ...
	I0923 13:11:07.940698  720939 start.go:241] waiting for startup goroutines ...
	I0923 13:11:08.174105  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:08.306693  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:08.310919  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:11:08.672506  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:08.807134  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:08.808752  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:11:09.171735  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:09.311572  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:11:09.313005  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:09.671299  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:09.805678  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:09.807490  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:11:10.172031  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:10.306350  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:10.307496  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:11:10.671837  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:10.808087  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:10.809146  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:11:11.171435  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:11.307432  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:11.308350  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:11:11.672107  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:11.805290  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:11.808348  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:11:12.172432  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:12.306839  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:12.307565  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:11:12.673245  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:12.807093  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:12.808628  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:11:13.171893  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:13.307380  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:13.308244  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:11:13.672264  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:13.806166  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:13.809558  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:11:14.171927  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:14.306058  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:14.309315  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:11:14.671345  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:14.806556  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:14.808021  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:11:15.172180  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:15.306493  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:15.309537  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:11:15.672105  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:15.806191  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:15.808468  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:11:16.173873  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:16.306866  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:11:16.308620  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:16.672551  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:16.807032  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:16.808920  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:11:17.172572  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:17.306504  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:17.307595  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:11:17.671552  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:17.805721  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:17.808301  720939 kapi.go:107] duration metric: took 16.505278276s to wait for kubernetes.io/minikube-addons=registry ...
	I0923 13:11:18.175454  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:18.306020  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:18.677669  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:18.807995  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:19.172319  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:19.306603  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:19.670709  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:19.805883  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:20.172205  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:20.305874  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:20.671605  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:20.806625  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:21.172036  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:21.305564  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:21.671045  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:21.805991  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:22.171662  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:22.306061  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:22.672012  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:22.806034  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:23.171758  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:23.306036  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:23.673094  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:23.810121  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:24.171951  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:24.306275  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:24.671610  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:24.806473  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:25.171691  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:25.305574  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:25.672383  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:25.805458  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:26.172670  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:26.309828  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:26.673643  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:26.806021  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:27.171858  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:27.318971  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:27.677040  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:27.811349  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:28.172454  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:28.306001  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:28.672312  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:28.807423  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:29.171958  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:29.306566  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:29.671338  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:29.805667  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:30.179149  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:30.305907  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:30.671945  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:30.806940  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:31.173333  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:31.306299  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:31.671678  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:31.806192  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:32.171865  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:32.306775  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:32.671065  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:32.806980  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:33.172453  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:33.305599  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:33.671623  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:33.806795  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:34.171707  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:34.305991  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:34.671247  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:34.805387  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:35.178534  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:35.306403  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:35.681463  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:35.807100  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:36.173772  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:36.306758  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:36.674212  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:36.806221  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:37.174544  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:37.305732  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:37.672576  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:37.808315  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:38.171695  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:38.306958  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:38.671664  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:38.807048  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:39.173046  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:39.305298  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:39.671538  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:39.806541  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:40.173084  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:40.306810  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:40.687821  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:40.806889  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:41.172701  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:41.310455  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:41.670953  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:41.806236  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:42.172568  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:42.307505  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:42.672410  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:42.807745  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:43.170859  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:43.305922  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:43.672406  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:43.805994  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:44.172228  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:44.306008  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:44.672422  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:44.806422  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:45.173375  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:45.307653  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:45.672390  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:45.805916  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:46.172136  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:46.305862  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:46.671645  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:46.819026  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:47.172284  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:47.305819  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:47.672551  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:47.806653  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:48.171324  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:48.309416  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:48.672431  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:48.806841  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:49.170908  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:49.309085  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:49.672170  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:49.806069  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:50.173767  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:50.306275  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:50.673376  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:50.806955  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:51.172539  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:51.306774  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:51.673511  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:51.806715  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:52.171496  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:52.306042  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:52.677229  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:52.805671  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:53.176395  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:53.305511  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:53.672457  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:53.805436  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:54.175832  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:54.312720  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:54.670967  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:11:54.806112  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:55.172000  720939 kapi.go:107] duration metric: took 53.005625527s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0923 13:11:55.308212  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:55.805696  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:56.305572  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:56.805951  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:57.306084  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:57.806091  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:58.305892  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:58.805785  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:59.305234  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:11:59.805488  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:12:00.350297  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:12:00.805893  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:12:01.305935  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:12:01.805784  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:12:02.306626  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:12:02.806178  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:12:03.306094  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:12:03.805917  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:12:04.310835  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:12:04.806245  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:12:05.306438  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:12:05.805651  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:12:06.306173  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:12:06.805304  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:12:07.305884  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:12:07.805890  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:12:08.306408  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:12:08.806777  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:12:09.311879  720939 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:12:09.806668  720939 kapi.go:107] duration metric: took 1m8.505515388s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0923 13:12:27.512374  720939 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0923 13:12:27.512402  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:12:28.015237  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:12:28.509919  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:12:29.011529  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:12:29.511274  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:12:30.018681  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:12:30.510657  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:12:31.011640  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:12:31.510805  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:12:32.013712  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:12:32.511224  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:12:33.011394  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:12:33.510227  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:12:34.011401  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:12:34.510492  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:12:35.016200  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:12:35.511476  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:12:36.017772  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:12:36.511291  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:12:37.014422  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:12:37.510763  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:12:38.016874  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:12:38.510741  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:12:39.011669  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:12:39.510482  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:12:40.015279  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:12:40.511563  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:12:41.016464  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:12:41.510970  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:12:42.017485  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:12:42.510908  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:12:43.012017  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:12:43.510624  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:12:44.011303  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:12:44.510238  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:12:45.016701  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:12:45.511279  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:12:46.012615  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:12:46.510461  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:12:47.011812  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:12:47.510044  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:12:48.016738  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:12:48.511115  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:12:49.010868  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:12:49.510665  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:12:50.017050  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:12:50.511032  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:12:51.012979  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:12:51.511304  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:12:52.012185  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:12:52.511094  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:12:53.013078  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:12:53.509908  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:12:54.011482  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:12:54.511017  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:12:55.017227  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:12:55.511183  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:12:56.013357  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:12:56.510825  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:12:57.012335  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:12:57.510388  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:12:58.010921  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:12:58.510739  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:12:59.011815  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:12:59.510533  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:13:00.043760  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:13:00.512534  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:13:01.016282  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:13:01.510860  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:13:02.013410  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:13:02.510733  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:13:03.020866  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:13:03.510837  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:13:04.013555  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:13:04.510540  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:13:05.017637  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:13:05.510963  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:13:06.015625  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:13:06.511101  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:13:07.030045  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:13:07.511128  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:13:08.011829  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:13:08.510550  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:13:09.012106  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:13:09.509989  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:13:10.022421  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:13:10.510641  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:13:11.011608  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:13:11.510322  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:13:12.013146  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:13:12.510902  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:13:13.011081  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:13:13.511044  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:13:14.013248  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:13:14.510483  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:13:15.015326  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:13:15.511042  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:13:16.013804  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:13:16.510763  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:13:17.010793  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:13:17.510965  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:13:18.013411  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:13:18.510508  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:13:19.010935  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:13:19.510598  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:13:20.025368  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:13:20.510464  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:13:21.011731  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:13:21.510488  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:13:22.011292  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:13:22.510429  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:13:23.011917  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:13:23.510596  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:13:24.014360  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:13:24.510164  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:13:25.011575  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:13:25.511134  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:13:26.014753  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:13:26.510775  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:13:27.014405  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:13:27.510541  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:13:28.011216  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:13:28.511225  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:13:29.010259  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:13:29.510098  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:13:30.011915  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:13:30.511120  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:13:31.011612  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:13:31.510548  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:13:32.011107  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:13:32.511438  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:13:33.013423  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:13:33.510607  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:13:34.011937  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:13:34.511406  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:13:35.015908  720939 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:13:35.511155  720939 kapi.go:107] duration metric: took 2m31.504575966s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0923 13:13:35.513465  720939 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-816293 cluster.
	I0923 13:13:35.515962  720939 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0923 13:13:35.517866  720939 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0923 13:13:35.519899  720939 out.go:177] * Enabled addons: storage-provisioner-rancher, volcano, cloud-spanner, nvidia-device-plugin, storage-provisioner, ingress-dns, metrics-server, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0923 13:13:35.521552  720939 addons.go:510] duration metric: took 2m46.707488022s for enable addons: enabled=[storage-provisioner-rancher volcano cloud-spanner nvidia-device-plugin storage-provisioner ingress-dns metrics-server inspektor-gadget yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0923 13:13:35.521599  720939 start.go:246] waiting for cluster config update ...
	I0923 13:13:35.521620  720939 start.go:255] writing updated cluster config ...
	I0923 13:13:35.521893  720939 ssh_runner.go:195] Run: rm -f paused
	I0923 13:13:35.860079  720939 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0923 13:13:35.862280  720939 out.go:177] * Done! kubectl is now configured to use "addons-816293" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 23 13:22:47 addons-816293 dockerd[1285]: time="2024-09-23T13:22:47.499149427Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=f538ed6cef339445 traceID=cc2baa26cc4631cd6d38b1dd019950db
	Sep 23 13:22:47 addons-816293 dockerd[1285]: time="2024-09-23T13:22:47.501585854Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=f538ed6cef339445 traceID=cc2baa26cc4631cd6d38b1dd019950db
	Sep 23 13:22:53 addons-816293 cri-dockerd[1543]: time="2024-09-23T13:22:53Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
	Sep 23 13:22:54 addons-816293 dockerd[1285]: time="2024-09-23T13:22:54.861305738Z" level=info msg="ignoring event" container=7668f0ceedb8f4ec2752e9ea660771227eeea826142f518ca7c510d180ecc107 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 13:22:55 addons-816293 dockerd[1285]: time="2024-09-23T13:22:55.312383133Z" level=info msg="ignoring event" container=00177226599cc99a0b0b1a06432e1fd941a947505cd8bf04d9c7ef879735e76f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 13:22:55 addons-816293 dockerd[1285]: time="2024-09-23T13:22:55.469836803Z" level=info msg="ignoring event" container=88808b249d7bbbca660ced9ac38e50200e807e0ff1e94030f7dbbafd5c0ec2c9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 13:22:55 addons-816293 cri-dockerd[1543]: time="2024-09-23T13:22:55Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/83024b1861650a1bf588c0c43be28e03ef0e0f4a30a60d47c62b5c8eb0698db4/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Sep 23 13:22:56 addons-816293 dockerd[1285]: time="2024-09-23T13:22:56.032122262Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" spanID=f10cd9597e0ce995 traceID=1c3c1ec305c29342c5f8a2a907ee94e1
	Sep 23 13:22:56 addons-816293 cri-dockerd[1543]: time="2024-09-23T13:22:56Z" level=info msg="Stop pulling image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: Status: Downloaded newer image for busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 23 13:22:56 addons-816293 dockerd[1285]: time="2024-09-23T13:22:56.750804799Z" level=info msg="ignoring event" container=db417b9e5803bf9e4963bbccc8b738f6accfabc6cd674738370a26ffd3f59c7c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 13:22:58 addons-816293 dockerd[1285]: time="2024-09-23T13:22:58.789101074Z" level=info msg="ignoring event" container=83024b1861650a1bf588c0c43be28e03ef0e0f4a30a60d47c62b5c8eb0698db4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 13:23:00 addons-816293 cri-dockerd[1543]: time="2024-09-23T13:23:00Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a964d58d99a33d1a75c4ba8703360cc3bd6aa5467902b5a8efff40c6a716044f/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Sep 23 13:23:01 addons-816293 cri-dockerd[1543]: time="2024-09-23T13:23:01Z" level=info msg="Stop pulling image busybox:stable: Status: Downloaded newer image for busybox:stable"
	Sep 23 13:23:01 addons-816293 dockerd[1285]: time="2024-09-23T13:23:01.759623170Z" level=info msg="ignoring event" container=3b8eb6ddc0f11ca8889ffedd16f98154678dbbb07d35b894b2af4d8a7c5ef314 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 13:23:02 addons-816293 dockerd[1285]: time="2024-09-23T13:23:02.993713477Z" level=info msg="ignoring event" container=a964d58d99a33d1a75c4ba8703360cc3bd6aa5467902b5a8efff40c6a716044f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 13:23:04 addons-816293 cri-dockerd[1543]: time="2024-09-23T13:23:04Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b239ddbcd9391811402d602e085c02cb8ac091d0e99d376d8f04685d8cc0c8fd/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Sep 23 13:23:04 addons-816293 dockerd[1285]: time="2024-09-23T13:23:04.850782219Z" level=info msg="ignoring event" container=47d3997eaac56348674539dc0eae3d489623bd0c173c714bb3005fe952c52ec5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 13:23:06 addons-816293 dockerd[1285]: time="2024-09-23T13:23:06.078118788Z" level=info msg="ignoring event" container=b239ddbcd9391811402d602e085c02cb8ac091d0e99d376d8f04685d8cc0c8fd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 13:23:16 addons-816293 dockerd[1285]: time="2024-09-23T13:23:16.494117357Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=55ff5c9b707c97b2 traceID=7cca6af5846fe79502952676d7653b2e
	Sep 23 13:23:16 addons-816293 dockerd[1285]: time="2024-09-23T13:23:16.497101917Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=55ff5c9b707c97b2 traceID=7cca6af5846fe79502952676d7653b2e
	Sep 23 13:23:32 addons-816293 dockerd[1285]: time="2024-09-23T13:23:32.008758007Z" level=info msg="ignoring event" container=4fc7be12e52b17854c5da685e7ad5670fffb6f2f4640a8c872b292629a89ad62 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 13:23:32 addons-816293 dockerd[1285]: time="2024-09-23T13:23:32.743112541Z" level=info msg="ignoring event" container=7e1fe19f5e027be257dc4e6a04138ddb1004588180fc0e62d3ac3b175b519e80 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 13:23:32 addons-816293 dockerd[1285]: time="2024-09-23T13:23:32.743161460Z" level=info msg="ignoring event" container=305fc2e9c28adbe9bbd8e58d7553be1be27ea57fe28fa34d6c577bf747d9619c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 13:23:32 addons-816293 dockerd[1285]: time="2024-09-23T13:23:32.946523914Z" level=info msg="ignoring event" container=86255d787bbf10cf324d74fb05bbd2766725736b24518c28900784907494c5db module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 13:23:32 addons-816293 dockerd[1285]: time="2024-09-23T13:23:32.995768009Z" level=info msg="ignoring event" container=2ae468bb920e1bfee11a176fdcbc4aa0c4d8bfbb3bbfa20b7c195b7d7465d4b0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	47d3997eaac56       fc9db2894f4e4                                                                                                                                29 seconds ago      Exited              helper-pod                               0                   b239ddbcd9391       helper-pod-delete-pvc-3f2fcd29-74af-42b3-bac1-c6876ced45a4
	3b8eb6ddc0f11       busybox@sha256:c230832bd3b0be59a6c47ed64294f9ce71e91b327957920b6929a0caa8353140                                                              32 seconds ago      Exited              busybox                                  0                   a964d58d99a33       test-local-path
	db417b9e5803b       busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79                                                              37 seconds ago      Exited              helper-pod                               0                   83024b1861650       helper-pod-create-pvc-3f2fcd29-74af-42b3-bac1-c6876ced45a4
	88808b249d7bb       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec                            40 seconds ago      Exited              gadget                                   7                   37083df50b06c       gadget-7v9cd
	3584aca3bba88       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                                 9 minutes ago       Running             gcp-auth                                 0                   1f9e4bfc2fdd9       gcp-auth-89d5ffd79-2v88k
	029872f6b5cdf       registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce                             11 minutes ago      Running             controller                               0                   46661f5ee0e4a       ingress-nginx-controller-bc57996ff-s62wl
	274979a9fe0e9       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          11 minutes ago      Running             csi-snapshotter                          0                   16b075d8daec5       csi-hostpathplugin-c4lh2
	43a0658c6a738       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          11 minutes ago      Running             csi-provisioner                          0                   16b075d8daec5       csi-hostpathplugin-c4lh2
	d02b14cda814e       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            11 minutes ago      Running             liveness-probe                           0                   16b075d8daec5       csi-hostpathplugin-c4lh2
	45f822a835caa       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           11 minutes ago      Running             hostpath                                 0                   16b075d8daec5       csi-hostpathplugin-c4lh2
	f5bd230d2455b       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                11 minutes ago      Running             node-driver-registrar                    0                   16b075d8daec5       csi-hostpathplugin-c4lh2
	4a6d829ad645e       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              11 minutes ago      Running             csi-resizer                              0                   e3a62a4a2f255       csi-hostpath-resizer-0
	24fbfd17d5243       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   11 minutes ago      Running             csi-external-health-monitor-controller   0                   16b075d8daec5       csi-hostpathplugin-c4lh2
	d0d6116c976a3       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             11 minutes ago      Running             csi-attacher                             0                   9e888dd2d0cdc       csi-hostpath-attacher-0
	b086b738b7c6e       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3                   11 minutes ago      Exited              patch                                    0                   a2a807762b8df       ingress-nginx-admission-patch-ftkgz
	4f1f570cced09       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3                   11 minutes ago      Exited              create                                   0                   b37e3d212caf2       ingress-nginx-admission-create-w5qhg
	4c45f509a097f       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      11 minutes ago      Running             volume-snapshot-controller               0                   d620a1b72fbd2       snapshot-controller-56fcc65765-k6tz8
	35da5d5624d7e       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      11 minutes ago      Running             volume-snapshot-controller               0                   a712d2cbd914c       snapshot-controller-56fcc65765-w468l
	8f5e8f334c8ac       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       12 minutes ago      Running             local-path-provisioner                   0                   8bf5c192f0896       local-path-provisioner-86d989889c-j9gpc
	7db6e438b0f08       registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9                        12 minutes ago      Running             metrics-server                           0                   850c3c8d32af6       metrics-server-84c5f94fbc-v6k5c
	68f2ed892c36d       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c                             12 minutes ago      Running             minikube-ingress-dns                     0                   63e882378472e       kube-ingress-dns-minikube
	7a2b773ac9dfe       gcr.io/cloud-spanner-emulator/emulator@sha256:f78b14fe7e4632fc0b3c65e15101ebbbcf242857de9851d3c0baea94bd269b5e                               12 minutes ago      Running             cloud-spanner-emulator                   0                   21d8d148c1eeb       cloud-spanner-emulator-5b584cc74-v58d6
	65ecee846c2c5       ba04bb24b9575                                                                                                                                12 minutes ago      Running             storage-provisioner                      0                   c7e3ccd0c5124       storage-provisioner
	26bc8fd7126ec       2f6c962e7b831                                                                                                                                12 minutes ago      Running             coredns                                  0                   bad32c70ad21e       coredns-7c65d6cfc9-rwnh8
	324593818f525       24a140c548c07                                                                                                                                12 minutes ago      Running             kube-proxy                               0                   953326dd498b9       kube-proxy-gwjn5
	264f46b7575fb       7f8aa378bb47d                                                                                                                                12 minutes ago      Running             kube-scheduler                           0                   bd9f4a6207b63       kube-scheduler-addons-816293
	03006510c8a1e       d3f53a98c0a9d                                                                                                                                12 minutes ago      Running             kube-apiserver                           0                   fda909103c022       kube-apiserver-addons-816293
	bfd0f71f456d2       279f381cb3736                                                                                                                                12 minutes ago      Running             kube-controller-manager                  0                   c8ef304b7d93e       kube-controller-manager-addons-816293
	7fdbee2111413       27e3830e14027                                                                                                                                12 minutes ago      Running             etcd                                     0                   4df52ac6362f8       etcd-addons-816293
	
	
	==> controller_ingress [029872f6b5cd] <==
	  Build:         46e76e5916813cfca2a9b0bfdc34b69a0000f6b9
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.25.5
	
	-------------------------------------------------------------------------------
	
	W0923 13:12:09.184703       7 client_config.go:659] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
	I0923 13:12:09.184845       7 main.go:205] "Creating API client" host="https://10.96.0.1:443"
	I0923 13:12:09.199900       7 main.go:248] "Running in Kubernetes cluster" major="1" minor="31" git="v1.31.1" state="clean" commit="948afe5ca072329a73c8e79ed5938717a5cb3d21" platform="linux/arm64"
	I0923 13:12:09.506695       7 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0923 13:12:09.549903       7 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0923 13:12:09.562273       7 nginx.go:271] "Starting NGINX Ingress controller"
	I0923 13:12:09.583415       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"bb62a6af-c7f1-4854-9523-4741fa21b40e", APIVersion:"v1", ResourceVersion:"667", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0923 13:12:09.587445       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"0eb70592-0188-41cb-8da2-74243fd19f81", APIVersion:"v1", ResourceVersion:"669", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0923 13:12:09.588026       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"9643e1a1-73c3-405d-be82-80c979875247", APIVersion:"v1", ResourceVersion:"672", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0923 13:12:10.764288       7 nginx.go:317] "Starting NGINX process"
	I0923 13:12:10.764532       7 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0923 13:12:10.766456       7 nginx.go:337] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0923 13:12:10.766991       7 controller.go:193] "Configuration changes detected, backend reload required"
	I0923 13:12:10.787467       7 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0923 13:12:10.787730       7 status.go:85] "New leader elected" identity="ingress-nginx-controller-bc57996ff-s62wl"
	I0923 13:12:10.794219       7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-bc57996ff-s62wl" node="addons-816293"
	I0923 13:12:10.812213       7 controller.go:213] "Backend successfully reloaded"
	I0923 13:12:10.812511       7 controller.go:224] "Initial sync, sleeping for 1 second"
	I0923 13:12:10.812643       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-s62wl", UID:"29821d5d-7904-491d-a7ff-bd0e0644ae09", APIVersion:"v1", ResourceVersion:"1233", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	
	
	==> coredns [26bc8fd7126e] <==
	[INFO] 10.244.0.7:34067 - 9749 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000107052s
	[INFO] 10.244.0.7:34812 - 57063 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002680123s
	[INFO] 10.244.0.7:34812 - 37346 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002523842s
	[INFO] 10.244.0.7:42141 - 59818 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000166087s
	[INFO] 10.244.0.7:42141 - 42153 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000137123s
	[INFO] 10.244.0.7:54786 - 13928 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000198456s
	[INFO] 10.244.0.7:54786 - 15510 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000133989s
	[INFO] 10.244.0.7:49660 - 41606 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000109923s
	[INFO] 10.244.0.7:49660 - 2179 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000040082s
	[INFO] 10.244.0.7:35772 - 12621 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000090855s
	[INFO] 10.244.0.7:35772 - 19787 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000039031s
	[INFO] 10.244.0.7:49288 - 41378 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002303323s
	[INFO] 10.244.0.7:49288 - 42943 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002648264s
	[INFO] 10.244.0.7:58628 - 45768 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000081189s
	[INFO] 10.244.0.7:58628 - 61383 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00010299s
	[INFO] 10.244.0.25:35480 - 44193 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000200311s
	[INFO] 10.244.0.25:37735 - 28043 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000186583s
	[INFO] 10.244.0.25:43220 - 59356 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000108717s
	[INFO] 10.244.0.25:58980 - 58863 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000065993s
	[INFO] 10.244.0.25:47844 - 42780 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000098748s
	[INFO] 10.244.0.25:36647 - 9954 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000088598s
	[INFO] 10.244.0.25:47597 - 42144 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002161608s
	[INFO] 10.244.0.25:44744 - 13618 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002070237s
	[INFO] 10.244.0.25:55206 - 26399 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001733714s
	[INFO] 10.244.0.25:45039 - 27901 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001583799s
	
	
	==> describe nodes <==
	Name:               addons-816293
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-816293
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1
	                    minikube.k8s.io/name=addons-816293
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_23T13_10_44_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-816293
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-816293"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 13:10:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-816293
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 13:23:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 13:23:18 +0000   Mon, 23 Sep 2024 13:10:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 13:23:18 +0000   Mon, 23 Sep 2024 13:10:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 13:23:18 +0000   Mon, 23 Sep 2024 13:10:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 13:23:18 +0000   Mon, 23 Sep 2024 13:10:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-816293
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 189321b7b41a49ecb3cbdf57a41c9ca7
	  System UUID:                c0380f16-60fd-4321-84ff-494177588bf5
	  Boot ID:                    a368a3b9-64b6-4915-adf4-926cc803503e
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.3.0
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (20 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m17s
	  default                     cloud-spanner-emulator-5b584cc74-v58d6      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  gadget                      gadget-7v9cd                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  gcp-auth                    gcp-auth-89d5ffd79-2v88k                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-s62wl    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         12m
	  kube-system                 coredns-7c65d6cfc9-rwnh8                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 csi-hostpathplugin-c4lh2                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-addons-816293                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-816293                250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-816293       200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-gwjn5                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-816293                100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 metrics-server-84c5f94fbc-v6k5c             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         12m
	  kube-system                 snapshot-controller-56fcc65765-k6tz8        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 snapshot-controller-56fcc65765-w468l        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          local-path-provisioner-86d989889c-j9gpc     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  0 (0%)
	  memory             460Mi (5%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 12m   kube-proxy       
	  Normal   Starting                 12m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  12m   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m   kubelet          Node addons-816293 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m   kubelet          Node addons-816293 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m   kubelet          Node addons-816293 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m   node-controller  Node addons-816293 event: Registered Node addons-816293 in Controller
	
	
	==> dmesg <==
	[Sep23 12:41] overlayfs: '/var/lib/containers/storage/overlay/l/7FOWQIVXOWACA56BLQVF4JJOLY' not a directory
	[  +0.214721] overlayfs: '/var/lib/containers/storage/overlay/l/7FOWQIVXOWACA56BLQVF4JJOLY' not a directory
	[  +0.310277] overlayfs: '/var/lib/containers/storage/overlay/l/7FOWQIVXOWACA56BLQVF4JJOLY' not a directory
	
	
	==> etcd [7fdbee211141] <==
	{"level":"info","ts":"2024-09-23T13:10:37.343567Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-23T13:10:37.343578Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-23T13:10:37.707811Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-23T13:10:37.708069Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-23T13:10:37.708230Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-09-23T13:10:37.708320Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-09-23T13:10:37.708408Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-23T13:10:37.708504Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-23T13:10:37.708609Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-23T13:10:37.713082Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T13:10:37.717133Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-816293 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-23T13:10:37.717403Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T13:10:37.717895Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T13:10:37.718188Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-23T13:10:37.718289Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-23T13:10:37.719054Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T13:10:37.727465Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-23T13:10:37.725077Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T13:10:37.727859Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T13:10:37.727963Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T13:10:37.726458Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T13:10:37.737137Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-23T13:20:38.293461Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1849}
	{"level":"info","ts":"2024-09-23T13:20:38.357061Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1849,"took":"63.067825ms","hash":505204955,"current-db-size-bytes":8933376,"current-db-size":"8.9 MB","current-db-size-in-use-bytes":4931584,"current-db-size-in-use":"4.9 MB"}
	{"level":"info","ts":"2024-09-23T13:20:38.357107Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":505204955,"revision":1849,"compact-revision":-1}
	
	
	==> gcp-auth [3584aca3bba8] <==
	2024/09/23 13:13:34 GCP Auth Webhook started!
	2024/09/23 13:13:52 Ready to marshal response ...
	2024/09/23 13:13:52 Ready to write response ...
	2024/09/23 13:13:52 Ready to marshal response ...
	2024/09/23 13:13:52 Ready to write response ...
	2024/09/23 13:14:17 Ready to marshal response ...
	2024/09/23 13:14:17 Ready to write response ...
	2024/09/23 13:14:17 Ready to marshal response ...
	2024/09/23 13:14:17 Ready to write response ...
	2024/09/23 13:14:17 Ready to marshal response ...
	2024/09/23 13:14:17 Ready to write response ...
	2024/09/23 13:22:21 Ready to marshal response ...
	2024/09/23 13:22:21 Ready to write response ...
	2024/09/23 13:22:21 Ready to marshal response ...
	2024/09/23 13:22:21 Ready to write response ...
	2024/09/23 13:22:21 Ready to marshal response ...
	2024/09/23 13:22:21 Ready to write response ...
	2024/09/23 13:22:31 Ready to marshal response ...
	2024/09/23 13:22:31 Ready to write response ...
	2024/09/23 13:22:55 Ready to marshal response ...
	2024/09/23 13:22:55 Ready to write response ...
	2024/09/23 13:22:55 Ready to marshal response ...
	2024/09/23 13:22:55 Ready to write response ...
	2024/09/23 13:23:04 Ready to marshal response ...
	2024/09/23 13:23:04 Ready to write response ...
	
	
	==> kernel <==
	 13:23:34 up  3:06,  0 users,  load average: 0.81, 0.77, 1.48
	Linux addons-816293 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [03006510c8a1] <==
	E0923 13:13:06.992214       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.42.25:443: connect: connection refused" logger="UnhandledError"
	W0923 13:13:07.037379       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.42.25:443: connect: connection refused
	E0923 13:13:07.037425       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.42.25:443: connect: connection refused" logger="UnhandledError"
	I0923 13:13:52.419920       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0923 13:13:52.452109       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	I0923 13:14:07.279004       1 handler.go:286] Adding GroupVersion batch.volcano.sh v1alpha1 to ResourceManager
	I0923 13:14:07.352476       1 handler.go:286] Adding GroupVersion bus.volcano.sh v1alpha1 to ResourceManager
	E0923 13:14:07.621951       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"volcano-controllers\" not found]"
	I0923 13:14:07.721138       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0923 13:14:07.743767       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	E0923 13:14:07.812209       1 watch.go:250] "Unhandled Error" err="write tcp 192.168.49.2:8443->10.244.0.16:50960: write: connection reset by peer" logger="UnhandledError"
	I0923 13:14:07.884628       1 handler.go:286] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
	I0923 13:14:07.923620       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0923 13:14:08.113434       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0923 13:14:08.173518       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0923 13:14:08.353854       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0923 13:14:08.452461       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	W0923 13:14:08.858884       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0923 13:14:08.924724       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0923 13:14:08.945972       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0923 13:14:09.024185       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0923 13:14:09.413014       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0923 13:14:09.539604       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0923 13:22:21.438806       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.110.29.145"}
	E0923 13:23:20.074681       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	
	
	==> kube-controller-manager [bfd0f71f456d] <==
	W0923 13:22:34.337066       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 13:22:34.337112       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 13:22:39.272097       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 13:22:39.272143       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0923 13:22:42.269702       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	I0923 13:22:43.935271       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-67d98fc6b" duration="8.664µs"
	I0923 13:22:47.473267       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-816293"
	I0923 13:22:54.067390       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	W0923 13:22:58.042054       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 13:22:58.042099       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 13:23:04.586640       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 13:23:04.586681       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0923 13:23:04.664576       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-86d989889c" duration="8.689µs"
	W0923 13:23:08.335864       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 13:23:08.335907       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 13:23:10.445700       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 13:23:10.445747       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 13:23:16.444595       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 13:23:16.444644       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 13:23:17.132563       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 13:23:17.132606       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0923 13:23:18.477845       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-816293"
	I0923 13:23:32.603010       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="4.226µs"
	W0923 13:23:33.226377       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 13:23:33.226432       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [324593818f52] <==
	I0923 13:10:48.679792       1 server_linux.go:66] "Using iptables proxy"
	I0923 13:10:48.880659       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0923 13:10:48.880729       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 13:10:48.921053       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0923 13:10:48.921291       1 server_linux.go:169] "Using iptables Proxier"
	I0923 13:10:48.923508       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0923 13:10:48.924391       1 server.go:483] "Version info" version="v1.31.1"
	I0923 13:10:48.925096       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 13:10:48.956295       1 config.go:199] "Starting service config controller"
	I0923 13:10:48.956531       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 13:10:48.956699       1 config.go:105] "Starting endpoint slice config controller"
	I0923 13:10:48.956793       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 13:10:48.961108       1 config.go:328] "Starting node config controller"
	I0923 13:10:48.961296       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 13:10:49.057264       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0923 13:10:49.057325       1 shared_informer.go:320] Caches are synced for service config
	I0923 13:10:49.070322       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [264f46b7575f] <==
	W0923 13:10:41.513614       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0923 13:10:41.513641       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 13:10:41.513697       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0923 13:10:41.513708       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 13:10:41.513875       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0923 13:10:41.513902       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0923 13:10:41.513971       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0923 13:10:41.513989       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 13:10:41.514050       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0923 13:10:41.514065       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 13:10:41.514137       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0923 13:10:41.514152       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 13:10:41.514210       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0923 13:10:41.514225       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 13:10:41.514266       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0923 13:10:41.514280       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 13:10:41.514450       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0923 13:10:41.514594       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0923 13:10:41.514619       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 13:10:41.514641       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0923 13:10:41.514653       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0923 13:10:41.514675       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 13:10:41.515021       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0923 13:10:41.515047       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0923 13:10:42.805901       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 23 13:23:25 addons-816293 kubelet[2339]: I0923 13:23:25.307700    2339 scope.go:117] "RemoveContainer" containerID="88808b249d7bbbca660ced9ac38e50200e807e0ff1e94030f7dbbafd5c0ec2c9"
	Sep 23 13:23:25 addons-816293 kubelet[2339]: E0923 13:23:25.307985    2339 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-7v9cd_gadget(d6a0eec7-6353-45b0-b40c-7d1b00387139)\"" pod="gadget/gadget-7v9cd" podUID="d6a0eec7-6353-45b0-b40c-7d1b00387139"
	Sep 23 13:23:25 addons-816293 kubelet[2339]: E0923 13:23:25.310718    2339 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="3365eeb6-9e62-4b5a-917e-979eac5a9b59"
	Sep 23 13:23:28 addons-816293 kubelet[2339]: E0923 13:23:28.311294    2339 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="f5de9158-d9e6-4b50-894e-b5d96aa9b8a2"
	Sep 23 13:23:32 addons-816293 kubelet[2339]: I0923 13:23:32.129645    2339 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/f5de9158-d9e6-4b50-894e-b5d96aa9b8a2-gcp-creds\") pod \"f5de9158-d9e6-4b50-894e-b5d96aa9b8a2\" (UID: \"f5de9158-d9e6-4b50-894e-b5d96aa9b8a2\") "
	Sep 23 13:23:32 addons-816293 kubelet[2339]: I0923 13:23:32.130191    2339 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-866mw\" (UniqueName: \"kubernetes.io/projected/f5de9158-d9e6-4b50-894e-b5d96aa9b8a2-kube-api-access-866mw\") pod \"f5de9158-d9e6-4b50-894e-b5d96aa9b8a2\" (UID: \"f5de9158-d9e6-4b50-894e-b5d96aa9b8a2\") "
	Sep 23 13:23:32 addons-816293 kubelet[2339]: I0923 13:23:32.130130    2339 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5de9158-d9e6-4b50-894e-b5d96aa9b8a2-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "f5de9158-d9e6-4b50-894e-b5d96aa9b8a2" (UID: "f5de9158-d9e6-4b50-894e-b5d96aa9b8a2"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 23 13:23:32 addons-816293 kubelet[2339]: I0923 13:23:32.136835    2339 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5de9158-d9e6-4b50-894e-b5d96aa9b8a2-kube-api-access-866mw" (OuterVolumeSpecName: "kube-api-access-866mw") pod "f5de9158-d9e6-4b50-894e-b5d96aa9b8a2" (UID: "f5de9158-d9e6-4b50-894e-b5d96aa9b8a2"). InnerVolumeSpecName "kube-api-access-866mw". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 23 13:23:32 addons-816293 kubelet[2339]: I0923 13:23:32.230658    2339 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/f5de9158-d9e6-4b50-894e-b5d96aa9b8a2-gcp-creds\") on node \"addons-816293\" DevicePath \"\""
	Sep 23 13:23:32 addons-816293 kubelet[2339]: I0923 13:23:32.230700    2339 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-866mw\" (UniqueName: \"kubernetes.io/projected/f5de9158-d9e6-4b50-894e-b5d96aa9b8a2-kube-api-access-866mw\") on node \"addons-816293\" DevicePath \"\""
	Sep 23 13:23:33 addons-816293 kubelet[2339]: I0923 13:23:33.140658    2339 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wn2rj\" (UniqueName: \"kubernetes.io/projected/ec93b34f-db00-4bde-8ed0-46a67564f5cc-kube-api-access-wn2rj\") pod \"ec93b34f-db00-4bde-8ed0-46a67564f5cc\" (UID: \"ec93b34f-db00-4bde-8ed0-46a67564f5cc\") "
	Sep 23 13:23:33 addons-816293 kubelet[2339]: I0923 13:23:33.140712    2339 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptxgl\" (UniqueName: \"kubernetes.io/projected/7b435d50-4b55-4c70-b6d9-b0e1fd522370-kube-api-access-ptxgl\") pod \"7b435d50-4b55-4c70-b6d9-b0e1fd522370\" (UID: \"7b435d50-4b55-4c70-b6d9-b0e1fd522370\") "
	Sep 23 13:23:33 addons-816293 kubelet[2339]: I0923 13:23:33.145861    2339 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b435d50-4b55-4c70-b6d9-b0e1fd522370-kube-api-access-ptxgl" (OuterVolumeSpecName: "kube-api-access-ptxgl") pod "7b435d50-4b55-4c70-b6d9-b0e1fd522370" (UID: "7b435d50-4b55-4c70-b6d9-b0e1fd522370"). InnerVolumeSpecName "kube-api-access-ptxgl". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 23 13:23:33 addons-816293 kubelet[2339]: I0923 13:23:33.150981    2339 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec93b34f-db00-4bde-8ed0-46a67564f5cc-kube-api-access-wn2rj" (OuterVolumeSpecName: "kube-api-access-wn2rj") pod "ec93b34f-db00-4bde-8ed0-46a67564f5cc" (UID: "ec93b34f-db00-4bde-8ed0-46a67564f5cc"). InnerVolumeSpecName "kube-api-access-wn2rj". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 23 13:23:33 addons-816293 kubelet[2339]: I0923 13:23:33.241100    2339 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-wn2rj\" (UniqueName: \"kubernetes.io/projected/ec93b34f-db00-4bde-8ed0-46a67564f5cc-kube-api-access-wn2rj\") on node \"addons-816293\" DevicePath \"\""
	Sep 23 13:23:33 addons-816293 kubelet[2339]: I0923 13:23:33.241153    2339 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-ptxgl\" (UniqueName: \"kubernetes.io/projected/7b435d50-4b55-4c70-b6d9-b0e1fd522370-kube-api-access-ptxgl\") on node \"addons-816293\" DevicePath \"\""
	Sep 23 13:23:33 addons-816293 kubelet[2339]: I0923 13:23:33.325648    2339 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5de9158-d9e6-4b50-894e-b5d96aa9b8a2" path="/var/lib/kubelet/pods/f5de9158-d9e6-4b50-894e-b5d96aa9b8a2/volumes"
	Sep 23 13:23:33 addons-816293 kubelet[2339]: I0923 13:23:33.451361    2339 scope.go:117] "RemoveContainer" containerID="7e1fe19f5e027be257dc4e6a04138ddb1004588180fc0e62d3ac3b175b519e80"
	Sep 23 13:23:33 addons-816293 kubelet[2339]: I0923 13:23:33.532231    2339 scope.go:117] "RemoveContainer" containerID="7e1fe19f5e027be257dc4e6a04138ddb1004588180fc0e62d3ac3b175b519e80"
	Sep 23 13:23:33 addons-816293 kubelet[2339]: E0923 13:23:33.533805    2339 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 7e1fe19f5e027be257dc4e6a04138ddb1004588180fc0e62d3ac3b175b519e80" containerID="7e1fe19f5e027be257dc4e6a04138ddb1004588180fc0e62d3ac3b175b519e80"
	Sep 23 13:23:33 addons-816293 kubelet[2339]: I0923 13:23:33.533850    2339 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"7e1fe19f5e027be257dc4e6a04138ddb1004588180fc0e62d3ac3b175b519e80"} err="failed to get container status \"7e1fe19f5e027be257dc4e6a04138ddb1004588180fc0e62d3ac3b175b519e80\": rpc error: code = Unknown desc = Error response from daemon: No such container: 7e1fe19f5e027be257dc4e6a04138ddb1004588180fc0e62d3ac3b175b519e80"
	Sep 23 13:23:33 addons-816293 kubelet[2339]: I0923 13:23:33.533887    2339 scope.go:117] "RemoveContainer" containerID="305fc2e9c28adbe9bbd8e58d7553be1be27ea57fe28fa34d6c577bf747d9619c"
	Sep 23 13:23:33 addons-816293 kubelet[2339]: I0923 13:23:33.568138    2339 scope.go:117] "RemoveContainer" containerID="305fc2e9c28adbe9bbd8e58d7553be1be27ea57fe28fa34d6c577bf747d9619c"
	Sep 23 13:23:33 addons-816293 kubelet[2339]: E0923 13:23:33.569342    2339 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 305fc2e9c28adbe9bbd8e58d7553be1be27ea57fe28fa34d6c577bf747d9619c" containerID="305fc2e9c28adbe9bbd8e58d7553be1be27ea57fe28fa34d6c577bf747d9619c"
	Sep 23 13:23:33 addons-816293 kubelet[2339]: I0923 13:23:33.569383    2339 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"305fc2e9c28adbe9bbd8e58d7553be1be27ea57fe28fa34d6c577bf747d9619c"} err="failed to get container status \"305fc2e9c28adbe9bbd8e58d7553be1be27ea57fe28fa34d6c577bf747d9619c\": rpc error: code = Unknown desc = Error response from daemon: No such container: 305fc2e9c28adbe9bbd8e58d7553be1be27ea57fe28fa34d6c577bf747d9619c"
	
	
	==> storage-provisioner [65ecee846c2c] <==
	I0923 13:10:54.957657       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0923 13:10:54.983775       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0923 13:10:54.983816       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0923 13:10:54.998064       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0923 13:10:55.000412       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-816293_b7915d01-4c14-48c9-bfcd-2780ccded785!
	I0923 13:10:55.005719       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ea18252a-3635-419b-b449-ef6bd3393b94", APIVersion:"v1", ResourceVersion:"493", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-816293_b7915d01-4c14-48c9-bfcd-2780ccded785 became leader
	I0923 13:10:55.100907       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-816293_b7915d01-4c14-48c9-bfcd-2780ccded785!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-816293 -n addons-816293
helpers_test.go:261: (dbg) Run:  kubectl --context addons-816293 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-w5qhg ingress-nginx-admission-patch-ftkgz local-path-provisioner-86d989889c-j9gpc
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-816293 describe pod busybox ingress-nginx-admission-create-w5qhg ingress-nginx-admission-patch-ftkgz local-path-provisioner-86d989889c-j9gpc
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-816293 describe pod busybox ingress-nginx-admission-create-w5qhg ingress-nginx-admission-patch-ftkgz local-path-provisioner-86d989889c-j9gpc: exit status 1 (100.714307ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-816293/192.168.49.2
	Start Time:       Mon, 23 Sep 2024 13:14:17 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4nlm7 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-4nlm7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m18s                  default-scheduler  Successfully assigned default/busybox to addons-816293
	  Normal   Pulling    7m48s (x4 over 9m18s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m48s (x4 over 9m17s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m48s (x4 over 9m17s)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m34s (x6 over 9m17s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m6s (x21 over 9m17s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-w5qhg" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-ftkgz" not found
	Error from server (NotFound): pods "local-path-provisioner-86d989889c-j9gpc" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-816293 describe pod busybox ingress-nginx-admission-create-w5qhg ingress-nginx-admission-patch-ftkgz local-path-provisioner-86d989889c-j9gpc: exit status 1
--- FAIL: TestAddons/parallel/Registry (74.69s)

                                                
                                    

Test pass (318/342)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 7.43
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.08
9 TestDownloadOnly/v1.20.0/DeleteAll 0.2
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.31.1/json-events 6.11
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.07
18 TestDownloadOnly/v1.31.1/DeleteAll 0.21
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.55
22 TestOffline 94.47
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 222.04
29 TestAddons/serial/Volcano 41.14
31 TestAddons/serial/GCPAuth/Namespaces 0.17
34 TestAddons/parallel/Ingress 19.99
35 TestAddons/parallel/InspektorGadget 11.75
36 TestAddons/parallel/MetricsServer 5.71
38 TestAddons/parallel/CSI 46.09
39 TestAddons/parallel/Headlamp 16.82
40 TestAddons/parallel/CloudSpanner 5.52
41 TestAddons/parallel/LocalPath 52.73
42 TestAddons/parallel/NvidiaDevicePlugin 5.63
43 TestAddons/parallel/Yakd 11.77
44 TestAddons/StoppedEnableDisable 6.12
45 TestCertOptions 38.31
46 TestCertExpiration 251.61
47 TestDockerFlags 44.25
48 TestForceSystemdFlag 46.84
49 TestForceSystemdEnv 46.22
55 TestErrorSpam/setup 35.43
56 TestErrorSpam/start 0.72
57 TestErrorSpam/status 1
58 TestErrorSpam/pause 1.39
59 TestErrorSpam/unpause 1.58
60 TestErrorSpam/stop 10.99
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 45.04
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 33.63
67 TestFunctional/serial/KubeContext 0.06
68 TestFunctional/serial/KubectlGetPods 0.09
71 TestFunctional/serial/CacheCmd/cache/add_remote 3.35
72 TestFunctional/serial/CacheCmd/cache/add_local 0.91
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
74 TestFunctional/serial/CacheCmd/cache/list 0.05
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.32
76 TestFunctional/serial/CacheCmd/cache/cache_reload 1.74
77 TestFunctional/serial/CacheCmd/cache/delete 0.11
78 TestFunctional/serial/MinikubeKubectlCmd 0.43
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.15
80 TestFunctional/serial/ExtraConfig 41.8
81 TestFunctional/serial/ComponentHealth 0.11
82 TestFunctional/serial/LogsCmd 1.13
83 TestFunctional/serial/LogsFileCmd 1.13
84 TestFunctional/serial/InvalidService 4.61
86 TestFunctional/parallel/ConfigCmd 0.54
87 TestFunctional/parallel/DashboardCmd 12.99
88 TestFunctional/parallel/DryRun 0.44
89 TestFunctional/parallel/InternationalLanguage 0.18
90 TestFunctional/parallel/StatusCmd 1.03
94 TestFunctional/parallel/ServiceCmdConnect 12.6
95 TestFunctional/parallel/AddonsCmd 0.2
96 TestFunctional/parallel/PersistentVolumeClaim 28.34
98 TestFunctional/parallel/SSHCmd 0.7
99 TestFunctional/parallel/CpCmd 2.43
101 TestFunctional/parallel/FileSync 0.37
102 TestFunctional/parallel/CertSync 2.12
106 TestFunctional/parallel/NodeLabels 0.11
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.33
110 TestFunctional/parallel/License 0.29
112 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.6
113 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
115 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.48
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.13
117 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
121 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
122 TestFunctional/parallel/ServiceCmd/DeployApp 7.26
123 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
124 TestFunctional/parallel/ProfileCmd/profile_list 0.4
125 TestFunctional/parallel/ProfileCmd/profile_json_output 0.52
126 TestFunctional/parallel/ServiceCmd/List 0.61
127 TestFunctional/parallel/MountCmd/any-port 7.64
128 TestFunctional/parallel/ServiceCmd/JSONOutput 0.68
129 TestFunctional/parallel/ServiceCmd/HTTPS 0.5
130 TestFunctional/parallel/ServiceCmd/Format 0.46
131 TestFunctional/parallel/ServiceCmd/URL 0.44
132 TestFunctional/parallel/MountCmd/specific-port 2.26
133 TestFunctional/parallel/MountCmd/VerifyCleanup 2.51
134 TestFunctional/parallel/Version/short 0.07
135 TestFunctional/parallel/Version/components 1.08
136 TestFunctional/parallel/ImageCommands/ImageListShort 0.32
137 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
138 TestFunctional/parallel/ImageCommands/ImageListJson 0.25
139 TestFunctional/parallel/ImageCommands/ImageListYaml 0.26
140 TestFunctional/parallel/ImageCommands/ImageBuild 3.68
141 TestFunctional/parallel/ImageCommands/Setup 0.77
142 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.05
143 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.78
144 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.07
145 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.4
146 TestFunctional/parallel/UpdateContextCmd/no_changes 0.19
147 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.21
148 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.22
149 TestFunctional/parallel/DockerEnv/bash 1.3
150 TestFunctional/parallel/ImageCommands/ImageRemove 0.53
151 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.89
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.5
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
159 TestMultiControlPlane/serial/StartCluster 127.42
160 TestMultiControlPlane/serial/DeployApp 8.84
161 TestMultiControlPlane/serial/PingHostFromPods 1.68
162 TestMultiControlPlane/serial/AddWorkerNode 26.44
163 TestMultiControlPlane/serial/NodeLabels 0.12
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.01
165 TestMultiControlPlane/serial/CopyFile 19.85
166 TestMultiControlPlane/serial/StopSecondaryNode 11.75
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.8
168 TestMultiControlPlane/serial/RestartSecondaryNode 49.85
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.03
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 252.95
171 TestMultiControlPlane/serial/DeleteSecondaryNode 11.2
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.75
173 TestMultiControlPlane/serial/StopCluster 33.14
174 TestMultiControlPlane/serial/RestartCluster 98.58
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.79
176 TestMultiControlPlane/serial/AddSecondaryNode 46.91
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.02
180 TestImageBuild/serial/Setup 34.7
181 TestImageBuild/serial/NormalBuild 1.81
182 TestImageBuild/serial/BuildWithBuildArg 1
183 TestImageBuild/serial/BuildWithDockerIgnore 0.86
184 TestImageBuild/serial/BuildWithSpecifiedDockerfile 1.05
188 TestJSONOutput/start/Command 39.02
189 TestJSONOutput/start/Audit 0
191 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/pause/Command 0.92
195 TestJSONOutput/pause/Audit 0
197 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/unpause/Command 0.53
201 TestJSONOutput/unpause/Audit 0
203 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
206 TestJSONOutput/stop/Command 5.84
207 TestJSONOutput/stop/Audit 0
209 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
210 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
211 TestErrorJSONOutput 0.23
213 TestKicCustomNetwork/create_custom_network 31.85
214 TestKicCustomNetwork/use_default_bridge_network 33.1
215 TestKicExistingNetwork 31.67
216 TestKicCustomSubnet 34.77
217 TestKicStaticIP 34.92
218 TestMainNoArgs 0.05
219 TestMinikubeProfile 70.01
222 TestMountStart/serial/StartWithMountFirst 7.78
223 TestMountStart/serial/VerifyMountFirst 0.26
224 TestMountStart/serial/StartWithMountSecond 8.9
225 TestMountStart/serial/VerifyMountSecond 0.26
226 TestMountStart/serial/DeleteFirst 1.48
227 TestMountStart/serial/VerifyMountPostDelete 0.26
228 TestMountStart/serial/Stop 1.21
229 TestMountStart/serial/RestartStopped 8.09
230 TestMountStart/serial/VerifyMountPostStop 0.27
233 TestMultiNode/serial/FreshStart2Nodes 83.44
234 TestMultiNode/serial/DeployApp2Nodes 50.71
235 TestMultiNode/serial/PingHostFrom2Pods 1.15
236 TestMultiNode/serial/AddNode 18.5
237 TestMultiNode/serial/MultiNodeLabels 0.12
238 TestMultiNode/serial/ProfileList 0.8
239 TestMultiNode/serial/CopyFile 10.04
240 TestMultiNode/serial/StopNode 2.24
241 TestMultiNode/serial/StartAfterStop 10.98
242 TestMultiNode/serial/RestartKeepsNodes 104.29
243 TestMultiNode/serial/DeleteNode 5.81
244 TestMultiNode/serial/StopMultiNode 21.75
245 TestMultiNode/serial/RestartMultiNode 56.97
246 TestMultiNode/serial/ValidateNameConflict 34.97
251 TestPreload 153.12
253 TestScheduledStopUnix 105.55
254 TestSkaffold 117.8
256 TestInsufficientStorage 11.92
257 TestRunningBinaryUpgrade 79.65
259 TestKubernetesUpgrade 383.99
260 TestMissingContainerUpgrade 116.51
262 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
263 TestNoKubernetes/serial/StartWithK8s 42.81
264 TestNoKubernetes/serial/StartWithStopK8s 18.91
265 TestNoKubernetes/serial/Start 9.96
266 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
267 TestNoKubernetes/serial/ProfileList 1.04
268 TestNoKubernetes/serial/Stop 1.27
269 TestNoKubernetes/serial/StartNoArgs 7.92
270 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
282 TestStoppedBinaryUpgrade/Setup 1.06
283 TestStoppedBinaryUpgrade/Upgrade 122.07
284 TestStoppedBinaryUpgrade/MinikubeLogs 1.35
293 TestPause/serial/Start 75.86
294 TestPause/serial/SecondStartNoReconfiguration 27.75
295 TestPause/serial/Pause 0.65
296 TestPause/serial/VerifyStatus 0.34
297 TestPause/serial/Unpause 0.53
298 TestPause/serial/PauseAgain 0.86
299 TestPause/serial/DeletePaused 2.18
300 TestPause/serial/VerifyDeletedResources 0.46
301 TestNetworkPlugins/group/auto/Start 49.93
302 TestNetworkPlugins/group/auto/KubeletFlags 0.28
303 TestNetworkPlugins/group/auto/NetCatPod 10.28
304 TestNetworkPlugins/group/auto/DNS 0.21
305 TestNetworkPlugins/group/auto/Localhost 0.17
306 TestNetworkPlugins/group/auto/HairPin 0.17
307 TestNetworkPlugins/group/kindnet/Start 82.61
308 TestNetworkPlugins/group/calico/Start 90.02
309 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
310 TestNetworkPlugins/group/kindnet/KubeletFlags 0.43
311 TestNetworkPlugins/group/kindnet/NetCatPod 11.43
312 TestNetworkPlugins/group/kindnet/DNS 0.39
313 TestNetworkPlugins/group/kindnet/Localhost 0.31
314 TestNetworkPlugins/group/kindnet/HairPin 0.28
315 TestNetworkPlugins/group/custom-flannel/Start 60.56
316 TestNetworkPlugins/group/calico/ControllerPod 6.01
317 TestNetworkPlugins/group/calico/KubeletFlags 0.41
318 TestNetworkPlugins/group/calico/NetCatPod 13.33
319 TestNetworkPlugins/group/calico/DNS 0.38
320 TestNetworkPlugins/group/calico/Localhost 0.27
321 TestNetworkPlugins/group/calico/HairPin 0.21
322 TestNetworkPlugins/group/false/Start 83.53
323 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.36
324 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.36
325 TestNetworkPlugins/group/custom-flannel/DNS 0.29
326 TestNetworkPlugins/group/custom-flannel/Localhost 0.21
327 TestNetworkPlugins/group/custom-flannel/HairPin 0.23
328 TestNetworkPlugins/group/enable-default-cni/Start 47.28
329 TestNetworkPlugins/group/false/KubeletFlags 0.28
330 TestNetworkPlugins/group/false/NetCatPod 10.28
331 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.29
332 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.3
333 TestNetworkPlugins/group/false/DNS 0.22
334 TestNetworkPlugins/group/false/Localhost 0.18
335 TestNetworkPlugins/group/false/HairPin 0.23
336 TestNetworkPlugins/group/enable-default-cni/DNS 0.28
337 TestNetworkPlugins/group/enable-default-cni/Localhost 0.23
338 TestNetworkPlugins/group/enable-default-cni/HairPin 0.29
339 TestNetworkPlugins/group/flannel/Start 69.04
340 TestNetworkPlugins/group/bridge/Start 86.39
341 TestNetworkPlugins/group/flannel/ControllerPod 6.01
342 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
343 TestNetworkPlugins/group/flannel/NetCatPod 11.26
344 TestNetworkPlugins/group/flannel/DNS 0.2
345 TestNetworkPlugins/group/flannel/Localhost 0.16
346 TestNetworkPlugins/group/flannel/HairPin 0.17
347 TestNetworkPlugins/group/bridge/KubeletFlags 0.39
348 TestNetworkPlugins/group/bridge/NetCatPod 11.38
349 TestNetworkPlugins/group/bridge/DNS 0.39
350 TestNetworkPlugins/group/bridge/Localhost 0.23
351 TestNetworkPlugins/group/bridge/HairPin 0.22
352 TestNetworkPlugins/group/kubenet/Start 53.85
354 TestStartStop/group/old-k8s-version/serial/FirstStart 153.61
355 TestNetworkPlugins/group/kubenet/KubeletFlags 0.44
356 TestNetworkPlugins/group/kubenet/NetCatPod 10.37
357 TestNetworkPlugins/group/kubenet/DNS 0.28
358 TestNetworkPlugins/group/kubenet/Localhost 0.28
359 TestNetworkPlugins/group/kubenet/HairPin 0.26
361 TestStartStop/group/no-preload/serial/FirstStart 85.09
362 TestStartStop/group/no-preload/serial/DeployApp 8.35
363 TestStartStop/group/old-k8s-version/serial/DeployApp 12.5
364 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.18
365 TestStartStop/group/no-preload/serial/Stop 11.16
366 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.06
367 TestStartStop/group/old-k8s-version/serial/Stop 11.07
368 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
369 TestStartStop/group/no-preload/serial/SecondStart 292.81
370 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.25
371 TestStartStop/group/old-k8s-version/serial/SecondStart 32.2
372 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 30.01
373 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
374 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
375 TestStartStop/group/old-k8s-version/serial/Pause 2.84
377 TestStartStop/group/embed-certs/serial/FirstStart 78.05
378 TestStartStop/group/embed-certs/serial/DeployApp 9.35
379 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.05
380 TestStartStop/group/embed-certs/serial/Stop 10.83
381 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
382 TestStartStop/group/embed-certs/serial/SecondStart 265.79
383 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
384 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.13
385 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
386 TestStartStop/group/no-preload/serial/Pause 2.99
388 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 45.84
389 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.37
390 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.11
391 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.91
392 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
393 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 266.35
394 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
395 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
396 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
397 TestStartStop/group/embed-certs/serial/Pause 3.01
399 TestStartStop/group/newest-cni/serial/FirstStart 39.42
400 TestStartStop/group/newest-cni/serial/DeployApp 0
401 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.14
402 TestStartStop/group/newest-cni/serial/Stop 9.11
403 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
404 TestStartStop/group/newest-cni/serial/SecondStart 19.91
405 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
406 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
407 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
408 TestStartStop/group/newest-cni/serial/Pause 2.9
409 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
410 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
411 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.22
412 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.1
TestDownloadOnly/v1.20.0/json-events (7.43s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-223839 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-223839 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (7.431155361s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (7.43s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0923 13:09:45.488369  720192 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0923 13:09:45.488462  720192 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19690-714802/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-223839
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-223839: exit status 85 (76.43138ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-223839 | jenkins | v1.34.0 | 23 Sep 24 13:09 UTC |          |
	|         | -p download-only-223839        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 13:09:38
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 13:09:38.103387  720197 out.go:345] Setting OutFile to fd 1 ...
	I0923 13:09:38.103579  720197 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:09:38.103589  720197 out.go:358] Setting ErrFile to fd 2...
	I0923 13:09:38.103595  720197 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:09:38.103857  720197 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-714802/.minikube/bin
	W0923 13:09:38.103998  720197 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19690-714802/.minikube/config/config.json: open /home/jenkins/minikube-integration/19690-714802/.minikube/config/config.json: no such file or directory
	I0923 13:09:38.104397  720197 out.go:352] Setting JSON to true
	I0923 13:09:38.105301  720197 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":10326,"bootTime":1727086652,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0923 13:09:38.105375  720197 start.go:139] virtualization:  
	I0923 13:09:38.108205  720197 out.go:97] [download-only-223839] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W0923 13:09:38.108395  720197 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19690-714802/.minikube/cache/preloaded-tarball: no such file or directory
	I0923 13:09:38.108498  720197 notify.go:220] Checking for updates...
	I0923 13:09:38.110490  720197 out.go:169] MINIKUBE_LOCATION=19690
	I0923 13:09:38.113184  720197 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 13:09:38.115368  720197 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19690-714802/kubeconfig
	I0923 13:09:38.117302  720197 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-714802/.minikube
	I0923 13:09:38.119348  720197 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0923 13:09:38.123302  720197 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0923 13:09:38.123556  720197 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 13:09:38.144832  720197 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0923 13:09:38.144980  720197 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 13:09:38.202168  720197 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-23 13:09:38.192380359 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 13:09:38.202279  720197 docker.go:318] overlay module found
	I0923 13:09:38.204154  720197 out.go:97] Using the docker driver based on user configuration
	I0923 13:09:38.204182  720197 start.go:297] selected driver: docker
	I0923 13:09:38.204188  720197 start.go:901] validating driver "docker" against <nil>
	I0923 13:09:38.204300  720197 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 13:09:38.256656  720197 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-23 13:09:38.24700287 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 13:09:38.256868  720197 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 13:09:38.257189  720197 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0923 13:09:38.257344  720197 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0923 13:09:38.259889  720197 out.go:169] Using Docker driver with root privileges
	I0923 13:09:38.261577  720197 cni.go:84] Creating CNI manager for ""
	I0923 13:09:38.261659  720197 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0923 13:09:38.261757  720197 start.go:340] cluster config:
	{Name:download-only-223839 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-223839 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 13:09:38.263568  720197 out.go:97] Starting "download-only-223839" primary control-plane node in "download-only-223839" cluster
	I0923 13:09:38.263595  720197 cache.go:121] Beginning downloading kic base image for docker with docker
	I0923 13:09:38.265556  720197 out.go:97] Pulling base image v0.0.45-1726784731-19672 ...
	I0923 13:09:38.265591  720197 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0923 13:09:38.265766  720197 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon
	I0923 13:09:38.280164  720197 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
	I0923 13:09:38.280347  720197 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory
	I0923 13:09:38.280453  720197 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
	I0923 13:09:38.325712  720197 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0923 13:09:38.325739  720197 cache.go:56] Caching tarball of preloaded images
	I0923 13:09:38.325896  720197 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0923 13:09:38.328224  720197 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0923 13:09:38.328254  720197 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0923 13:09:38.421553  720197 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /home/jenkins/minikube-integration/19690-714802/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-223839 host does not exist
	  To start a cluster, run: "minikube start -p download-only-223839"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.20s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-223839
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.31.1/json-events (6.11s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-136397 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-136397 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker  --container-runtime=docker: (6.110367457s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (6.11s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0923 13:09:52.016815  720192 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0923 13:09:52.016854  720192 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19690-714802/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-136397
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-136397: exit status 85 (68.680154ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-223839 | jenkins | v1.34.0 | 23 Sep 24 13:09 UTC |                     |
	|         | -p download-only-223839        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 23 Sep 24 13:09 UTC | 23 Sep 24 13:09 UTC |
	| delete  | -p download-only-223839        | download-only-223839 | jenkins | v1.34.0 | 23 Sep 24 13:09 UTC | 23 Sep 24 13:09 UTC |
	| start   | -o=json --download-only        | download-only-136397 | jenkins | v1.34.0 | 23 Sep 24 13:09 UTC |                     |
	|         | -p download-only-136397        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 13:09:45
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 13:09:45.946259  720394 out.go:345] Setting OutFile to fd 1 ...
	I0923 13:09:45.946396  720394 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:09:45.946407  720394 out.go:358] Setting ErrFile to fd 2...
	I0923 13:09:45.946413  720394 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:09:45.947053  720394 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-714802/.minikube/bin
	I0923 13:09:45.947553  720394 out.go:352] Setting JSON to true
	I0923 13:09:45.948496  720394 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":10334,"bootTime":1727086652,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0923 13:09:45.948603  720394 start.go:139] virtualization:  
	I0923 13:09:45.950938  720394 out.go:97] [download-only-136397] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0923 13:09:45.951160  720394 notify.go:220] Checking for updates...
	I0923 13:09:45.953278  720394 out.go:169] MINIKUBE_LOCATION=19690
	I0923 13:09:45.955546  720394 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 13:09:45.957436  720394 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19690-714802/kubeconfig
	I0923 13:09:45.959407  720394 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-714802/.minikube
	I0923 13:09:45.961250  720394 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0923 13:09:45.965047  720394 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0923 13:09:45.965422  720394 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 13:09:45.994603  720394 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0923 13:09:45.994739  720394 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 13:09:46.051752  720394 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-23 13:09:46.040815693 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 13:09:46.051893  720394 docker.go:318] overlay module found
	I0923 13:09:46.053838  720394 out.go:97] Using the docker driver based on user configuration
	I0923 13:09:46.053873  720394 start.go:297] selected driver: docker
	I0923 13:09:46.053881  720394 start.go:901] validating driver "docker" against <nil>
	I0923 13:09:46.054023  720394 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 13:09:46.110695  720394 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-23 13:09:46.100611959 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 13:09:46.110911  720394 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 13:09:46.111207  720394 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0923 13:09:46.111401  720394 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0923 13:09:46.113538  720394 out.go:169] Using Docker driver with root privileges
	I0923 13:09:46.115952  720394 cni.go:84] Creating CNI manager for ""
	I0923 13:09:46.116032  720394 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 13:09:46.116044  720394 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 13:09:46.116157  720394 start.go:340] cluster config:
	{Name:download-only-136397 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-136397 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 13:09:46.118117  720394 out.go:97] Starting "download-only-136397" primary control-plane node in "download-only-136397" cluster
	I0923 13:09:46.118155  720394 cache.go:121] Beginning downloading kic base image for docker with docker
	I0923 13:09:46.119987  720394 out.go:97] Pulling base image v0.0.45-1726784731-19672 ...
	I0923 13:09:46.120033  720394 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 13:09:46.120115  720394 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon
	I0923 13:09:46.135500  720394 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
	I0923 13:09:46.135645  720394 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory
	I0923 13:09:46.135671  720394 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory, skipping pull
	I0923 13:09:46.135677  720394 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed exists in cache, skipping pull
	I0923 13:09:46.135689  720394 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed as a tarball
	I0923 13:09:46.180161  720394 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 13:09:46.180202  720394 cache.go:56] Caching tarball of preloaded images
	I0923 13:09:46.180381  720394 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 13:09:46.182409  720394 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0923 13:09:46.182440  720394 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I0923 13:09:46.265798  720394 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4?checksum=md5:402f69b5e09ccb1e1dbe401b4cdd104d -> /home/jenkins/minikube-integration/19690-714802/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 13:09:50.425721  720394 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I0923 13:09:50.425848  720394 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19690-714802/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-136397 host does not exist
	  To start a cluster, run: "minikube start -p download-only-136397"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.07s)

TestDownloadOnly/v1.31.1/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.21s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-136397
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.55s)

=== RUN   TestBinaryMirror
I0923 13:09:53.215293  720192 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-953246 --alsologtostderr --binary-mirror http://127.0.0.1:39347 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-953246" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-953246
--- PASS: TestBinaryMirror (0.55s)

TestOffline (94.47s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-arm64 start -p offline-docker-537186 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-arm64 start -p offline-docker-537186 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (1m30.749280805s)
helpers_test.go:175: Cleaning up "offline-docker-537186" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p offline-docker-537186
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p offline-docker-537186: (3.722579749s)
--- PASS: TestOffline (94.47s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-816293
addons_test.go:975: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-816293: exit status 85 (72.994917ms)

-- stdout --
	* Profile "addons-816293" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-816293"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-816293
addons_test.go:986: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-816293: exit status 85 (70.86136ms)

-- stdout --
	* Profile "addons-816293" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-816293"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (222.04s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-816293 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-816293 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns: (3m42.038133242s)
--- PASS: TestAddons/Setup (222.04s)

TestAddons/serial/Volcano (41.14s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:843: volcano-admission stabilized in 47.390196ms
addons_test.go:835: volcano-scheduler stabilized in 47.562354ms
addons_test.go:851: volcano-controller stabilized in 47.87299ms
addons_test.go:857: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-lkflm" [66364400-9ab7-47c6-b339-f9f71db0b03a] Running
addons_test.go:857: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.004316562s
addons_test.go:861: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-fr4cs" [feadec3a-6b82-4588-bb47-91a647792233] Running
addons_test.go:861: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003493854s
addons_test.go:865: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-b295c" [6a42554b-fcf1-4e2b-b25f-2adeb223eb98] Running
addons_test.go:865: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.012872304s
addons_test.go:870: (dbg) Run:  kubectl --context addons-816293 delete -n volcano-system job volcano-admission-init
addons_test.go:876: (dbg) Run:  kubectl --context addons-816293 create -f testdata/vcjob.yaml
addons_test.go:884: (dbg) Run:  kubectl --context addons-816293 get vcjob -n my-volcano
addons_test.go:902: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [59a438da-d42e-42de-bfec-bd68c32f0ac7] Pending
helpers_test.go:344: "test-job-nginx-0" [59a438da-d42e-42de-bfec-bd68c32f0ac7] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [59a438da-d42e-42de-bfec-bd68c32f0ac7] Running
addons_test.go:902: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 14.004372743s
addons_test.go:906: (dbg) Run:  out/minikube-linux-arm64 -p addons-816293 addons disable volcano --alsologtostderr -v=1
addons_test.go:906: (dbg) Done: out/minikube-linux-arm64 -p addons-816293 addons disable volcano --alsologtostderr -v=1: (10.483937171s)
--- PASS: TestAddons/serial/Volcano (41.14s)

TestAddons/serial/GCPAuth/Namespaces (0.17s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context addons-816293 create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context addons-816293 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)

TestAddons/parallel/Ingress (19.99s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:205: (dbg) Run:  kubectl --context addons-816293 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:230: (dbg) Run:  kubectl --context addons-816293 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context addons-816293 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [ea94fc3b-91a3-465b-b7d6-855efe9f6496] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [ea94fc3b-91a3-465b-b7d6-855efe9f6496] Running
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003971451s
I0923 13:24:08.981909  720192 kapi.go:150] Service nginx in namespace default found.
addons_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p addons-816293 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:284: (dbg) Run:  kubectl --context addons-816293 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:289: (dbg) Run:  out/minikube-linux-arm64 -p addons-816293 ip
addons_test.go:295: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p addons-816293 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:304: (dbg) Done: out/minikube-linux-arm64 -p addons-816293 addons disable ingress-dns --alsologtostderr -v=1: (1.292948048s)
addons_test.go:309: (dbg) Run:  out/minikube-linux-arm64 -p addons-816293 addons disable ingress --alsologtostderr -v=1
addons_test.go:309: (dbg) Done: out/minikube-linux-arm64 -p addons-816293 addons disable ingress --alsologtostderr -v=1: (7.868152203s)
--- PASS: TestAddons/parallel/Ingress (19.99s)

TestAddons/parallel/InspektorGadget (11.75s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-7v9cd" [d6a0eec7-6353-45b0-b40c-7d1b00387139] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004492668s
addons_test.go:789: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-816293
addons_test.go:789: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-816293: (5.744842999s)
--- PASS: TestAddons/parallel/InspektorGadget (11.75s)

TestAddons/parallel/MetricsServer (5.71s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 2.368728ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-v6k5c" [47fddf7e-71ac-4304-b3a5-52200b9e861f] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004683275s
addons_test.go:413: (dbg) Run:  kubectl --context addons-816293 top pods -n kube-system
addons_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p addons-816293 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.71s)

TestAddons/parallel/CSI (46.09s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
I0923 13:23:46.478753  720192 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0923 13:23:46.483716  720192 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0923 13:23:46.483745  720192 kapi.go:107] duration metric: took 7.672047ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:505: csi-hostpath-driver pods stabilized in 7.680621ms
addons_test.go:508: (dbg) Run:  kubectl --context addons-816293 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-816293 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-816293 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-816293 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-816293 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-816293 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-816293 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-816293 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-816293 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-816293 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-816293 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-816293 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-816293 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-816293 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-816293 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-816293 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-816293 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-816293 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-816293 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context addons-816293 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [05122192-241c-40fb-898d-75473eb6108d] Pending
helpers_test.go:344: "task-pv-pod" [05122192-241c-40fb-898d-75473eb6108d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [05122192-241c-40fb-898d-75473eb6108d] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.0038595s
addons_test.go:528: (dbg) Run:  kubectl --context addons-816293 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-816293 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-816293 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context addons-816293 delete pod task-pv-pod
addons_test.go:538: (dbg) Done: kubectl --context addons-816293 delete pod task-pv-pod: (1.205339016s)
addons_test.go:544: (dbg) Run:  kubectl --context addons-816293 delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context addons-816293 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-816293 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-816293 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-816293 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context addons-816293 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [8f5f8f7e-9507-4a17-a98d-578c8a8cea97] Pending
helpers_test.go:344: "task-pv-pod-restore" [8f5f8f7e-9507-4a17-a98d-578c8a8cea97] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [8f5f8f7e-9507-4a17-a98d-578c8a8cea97] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003748774s
addons_test.go:570: (dbg) Run:  kubectl --context addons-816293 delete pod task-pv-pod-restore
addons_test.go:574: (dbg) Run:  kubectl --context addons-816293 delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context addons-816293 delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-linux-arm64 -p addons-816293 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-linux-arm64 -p addons-816293 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.66534766s)
addons_test.go:586: (dbg) Run:  out/minikube-linux-arm64 -p addons-816293 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (46.09s)

TestAddons/parallel/Headlamp (16.82s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-816293 --alsologtostderr -v=1
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-qs7dv" [01af048f-7ca9-4c90-be81-0b6aa72c8b03] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-qs7dv" [01af048f-7ca9-4c90-be81-0b6aa72c8b03] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-qs7dv" [01af048f-7ca9-4c90-be81-0b6aa72c8b03] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003611043s
addons_test.go:777: (dbg) Run:  out/minikube-linux-arm64 -p addons-816293 addons disable headlamp --alsologtostderr -v=1
addons_test.go:777: (dbg) Done: out/minikube-linux-arm64 -p addons-816293 addons disable headlamp --alsologtostderr -v=1: (5.843581337s)
--- PASS: TestAddons/parallel/Headlamp (16.82s)

TestAddons/parallel/CloudSpanner (5.52s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-v58d6" [58f1fc54-4f9f-4d35-b2a2-07ab35644143] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003295378s
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-816293
--- PASS: TestAddons/parallel/CloudSpanner (5.52s)

TestAddons/parallel/LocalPath (52.73s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:920: (dbg) Run:  kubectl --context addons-816293 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:926: (dbg) Run:  kubectl --context addons-816293 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:930: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-816293 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-816293 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-816293 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-816293 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-816293 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-816293 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [eed10a60-7c4b-4367-a517-ae72f571f6c8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [eed10a60-7c4b-4367-a517-ae72f571f6c8] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [eed10a60-7c4b-4367-a517-ae72f571f6c8] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003037459s
addons_test.go:938: (dbg) Run:  kubectl --context addons-816293 get pvc test-pvc -o=json
addons_test.go:947: (dbg) Run:  out/minikube-linux-arm64 -p addons-816293 ssh "cat /opt/local-path-provisioner/pvc-3f2fcd29-74af-42b3-bac1-c6876ced45a4_default_test-pvc/file1"
addons_test.go:959: (dbg) Run:  kubectl --context addons-816293 delete pod test-local-path
addons_test.go:963: (dbg) Run:  kubectl --context addons-816293 delete pvc test-pvc
addons_test.go:967: (dbg) Run:  out/minikube-linux-arm64 -p addons-816293 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:967: (dbg) Done: out/minikube-linux-arm64 -p addons-816293 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.421889295s)
--- PASS: TestAddons/parallel/LocalPath (52.73s)

TestAddons/parallel/NvidiaDevicePlugin (5.63s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-95vmg" [0441bbd4-ba18-4999-88db-f008dcc67689] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.010349908s
addons_test.go:1002: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-816293
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.63s)

TestAddons/parallel/Yakd (11.77s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-k2dw5" [a90fbf96-6919-42a5-b645-10baf69360bc] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003557248s
addons_test.go:1014: (dbg) Run:  out/minikube-linux-arm64 -p addons-816293 addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-linux-arm64 -p addons-816293 addons disable yakd --alsologtostderr -v=1: (5.768096997s)
--- PASS: TestAddons/parallel/Yakd (11.77s)

TestAddons/StoppedEnableDisable (6.12s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-816293
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-816293: (5.862652358s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-816293
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-816293
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-816293
--- PASS: TestAddons/StoppedEnableDisable (6.12s)

TestCertOptions (38.31s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-254319 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-254319 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (35.443763672s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-254319 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-254319 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-254319 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-254319" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-254319
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-254319: (2.189499824s)
--- PASS: TestCertOptions (38.31s)

TestCertExpiration (251.61s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-591741 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-591741 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (40.015448059s)
E0923 14:02:51.159120  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/functional-863481/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-591741 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
E0923 14:05:54.226019  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/functional-863481/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-591741 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (29.499130896s)
helpers_test.go:175: Cleaning up "cert-expiration-591741" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-591741
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-591741: (2.090925699s)
--- PASS: TestCertExpiration (251.61s)

TestDockerFlags (44.25s)
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-arm64 start -p docker-flags-983021 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-arm64 start -p docker-flags-983021 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (41.239195044s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-983021 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-983021 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-983021" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-flags-983021
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-flags-983021: (2.236625694s)
--- PASS: TestDockerFlags (44.25s)

TestForceSystemdFlag (46.84s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-638978 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E0923 14:01:38.989700  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-638978 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (42.503900577s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-638978 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-638978" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-638978
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-638978: (3.924549314s)
--- PASS: TestForceSystemdFlag (46.84s)

TestForceSystemdEnv (46.22s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-460426 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-460426 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (43.248943072s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-460426 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-460426" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-460426
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-460426: (2.440223659s)
--- PASS: TestForceSystemdEnv (46.22s)

TestErrorSpam/setup (35.43s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-730727 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-730727 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-730727 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-730727 --driver=docker  --container-runtime=docker: (35.433348116s)
--- PASS: TestErrorSpam/setup (35.43s)

TestErrorSpam/start (0.72s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-730727 --log_dir /tmp/nospam-730727 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-730727 --log_dir /tmp/nospam-730727 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-730727 --log_dir /tmp/nospam-730727 start --dry-run
--- PASS: TestErrorSpam/start (0.72s)

TestErrorSpam/status (1s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-730727 --log_dir /tmp/nospam-730727 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-730727 --log_dir /tmp/nospam-730727 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-730727 --log_dir /tmp/nospam-730727 status
--- PASS: TestErrorSpam/status (1.00s)

TestErrorSpam/pause (1.39s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-730727 --log_dir /tmp/nospam-730727 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-730727 --log_dir /tmp/nospam-730727 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-730727 --log_dir /tmp/nospam-730727 pause
--- PASS: TestErrorSpam/pause (1.39s)

TestErrorSpam/unpause (1.58s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-730727 --log_dir /tmp/nospam-730727 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-730727 --log_dir /tmp/nospam-730727 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-730727 --log_dir /tmp/nospam-730727 unpause
--- PASS: TestErrorSpam/unpause (1.58s)

TestErrorSpam/stop (10.99s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-730727 --log_dir /tmp/nospam-730727 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-730727 --log_dir /tmp/nospam-730727 stop: (10.793313087s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-730727 --log_dir /tmp/nospam-730727 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-730727 --log_dir /tmp/nospam-730727 stop
--- PASS: TestErrorSpam/stop (10.99s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19690-714802/.minikube/files/etc/test/nested/copy/720192/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (45.04s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-863481 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-863481 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (45.040135273s)
--- PASS: TestFunctional/serial/StartWithProxy (45.04s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (33.63s)

=== RUN   TestFunctional/serial/SoftStart
I0923 13:26:19.653117  720192 config.go:182] Loaded profile config "functional-863481": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-863481 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-863481 --alsologtostderr -v=8: (33.629356477s)
functional_test.go:663: soft start took 33.632824245s for "functional-863481" cluster.
I0923 13:26:53.282861  720192 config.go:182] Loaded profile config "functional-863481": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (33.63s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-863481 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.35s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-863481 cache add registry.k8s.io/pause:3.1: (1.148339995s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-863481 cache add registry.k8s.io/pause:3.3: (1.229938834s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.35s)

TestFunctional/serial/CacheCmd/cache/add_local (0.91s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-863481 /tmp/TestFunctionalserialCacheCmdcacheadd_local1440313286/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 cache add minikube-local-cache-test:functional-863481
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 cache delete minikube-local-cache-test:functional-863481
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-863481
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.91s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.74s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-863481 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (290.133897ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.74s)

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.43s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 kubectl -- --context functional-863481 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.43s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-863481 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

TestFunctional/serial/ExtraConfig (41.8s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-863481 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-863481 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.802247011s)
functional_test.go:761: restart took 41.80239965s for "functional-863481" cluster.
I0923 13:27:42.347546  720192 config.go:182] Loaded profile config "functional-863481": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (41.80s)

TestFunctional/serial/ComponentHealth (0.11s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-863481 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

TestFunctional/serial/LogsCmd (1.13s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-863481 logs: (1.126015029s)
--- PASS: TestFunctional/serial/LogsCmd (1.13s)

TestFunctional/serial/LogsFileCmd (1.13s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 logs --file /tmp/TestFunctionalserialLogsFileCmd3049451517/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-863481 logs --file /tmp/TestFunctionalserialLogsFileCmd3049451517/001/logs.txt: (1.128576112s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.13s)

TestFunctional/serial/InvalidService (4.61s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-863481 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-863481
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-863481: exit status 115 (449.045735ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32094 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-863481 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.61s)

TestFunctional/parallel/ConfigCmd (0.54s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-863481 config get cpus: exit status 14 (110.005226ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-863481 config get cpus: exit status 14 (85.690659ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.54s)

TestFunctional/parallel/DashboardCmd (12.99s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-863481 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-863481 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 760749: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.99s)

TestFunctional/parallel/DryRun (0.44s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-863481 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-863481 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (197.508254ms)

-- stdout --
	* [functional-863481] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19690-714802/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-714802/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0923 13:28:24.876327  760389 out.go:345] Setting OutFile to fd 1 ...
	I0923 13:28:24.876509  760389 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:28:24.876535  760389 out.go:358] Setting ErrFile to fd 2...
	I0923 13:28:24.876568  760389 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:28:24.877367  760389 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-714802/.minikube/bin
	I0923 13:28:24.877881  760389 out.go:352] Setting JSON to false
	I0923 13:28:24.879128  760389 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11453,"bootTime":1727086652,"procs":228,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0923 13:28:24.879225  760389 start.go:139] virtualization:  
	I0923 13:28:24.882519  760389 out.go:177] * [functional-863481] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0923 13:28:24.886720  760389 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 13:28:24.886752  760389 notify.go:220] Checking for updates...
	I0923 13:28:24.890293  760389 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 13:28:24.892144  760389 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19690-714802/kubeconfig
	I0923 13:28:24.893898  760389 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-714802/.minikube
	I0923 13:28:24.895864  760389 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0923 13:28:24.897387  760389 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 13:28:24.899548  760389 config.go:182] Loaded profile config "functional-863481": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 13:28:24.901421  760389 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 13:28:24.927208  760389 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0923 13:28:24.927341  760389 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 13:28:25.002000  760389 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-23 13:28:24.990739987 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 13:28:25.002148  760389 docker.go:318] overlay module found
	I0923 13:28:25.005203  760389 out.go:177] * Using the docker driver based on existing profile
	I0923 13:28:25.011163  760389 start.go:297] selected driver: docker
	I0923 13:28:25.011205  760389 start.go:901] validating driver "docker" against &{Name:functional-863481 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-863481 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 13:28:25.011331  760389 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 13:28:25.014061  760389 out.go:201] 
	W0923 13:28:25.016182  760389 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0923 13:28:25.018335  760389 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-863481 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.44s)

TestFunctional/parallel/InternationalLanguage (0.18s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-863481 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-863481 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (178.702529ms)

-- stdout --
	* [functional-863481] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19690-714802/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-714802/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0923 13:28:24.706409  760345 out.go:345] Setting OutFile to fd 1 ...
	I0923 13:28:24.706583  760345 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:28:24.706595  760345 out.go:358] Setting ErrFile to fd 2...
	I0923 13:28:24.706602  760345 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:28:24.707427  760345 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-714802/.minikube/bin
	I0923 13:28:24.707820  760345 out.go:352] Setting JSON to false
	I0923 13:28:24.708837  760345 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11453,"bootTime":1727086652,"procs":228,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0923 13:28:24.708919  760345 start.go:139] virtualization:  
	I0923 13:28:24.711874  760345 out.go:177] * [functional-863481] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I0923 13:28:24.713876  760345 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 13:28:24.713927  760345 notify.go:220] Checking for updates...
	I0923 13:28:24.717619  760345 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 13:28:24.719289  760345 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19690-714802/kubeconfig
	I0923 13:28:24.720935  760345 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-714802/.minikube
	I0923 13:28:24.722792  760345 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0923 13:28:24.724491  760345 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 13:28:24.726785  760345 config.go:182] Loaded profile config "functional-863481": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 13:28:24.727358  760345 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 13:28:24.754511  760345 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0923 13:28:24.754645  760345 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 13:28:24.811351  760345 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-23 13:28:24.800979058 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 13:28:24.811457  760345 docker.go:318] overlay module found
	I0923 13:28:24.813586  760345 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0923 13:28:24.815251  760345 start.go:297] selected driver: docker
	I0923 13:28:24.815267  760345 start.go:901] validating driver "docker" against &{Name:functional-863481 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-863481 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 13:28:24.815390  760345 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 13:28:24.817706  760345 out.go:201] 
	W0923 13:28:24.819549  760345 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0923 13:28:24.821377  760345 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)

TestFunctional/parallel/StatusCmd (1.03s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.03s)

TestFunctional/parallel/ServiceCmdConnect (12.6s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-863481 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-863481 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-lzwqc" [566c431f-f9a7-462b-a852-6e132f6bb303] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-lzwqc" [566c431f-f9a7-462b-a852-6e132f6bb303] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.003931267s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:32225
functional_test.go:1675: http://192.168.49.2:32225: success! body:

Hostname: hello-node-connect-65d86f57f4-lzwqc

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32225
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.60s)

TestFunctional/parallel/AddonsCmd (0.2s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.20s)

TestFunctional/parallel/PersistentVolumeClaim (28.34s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [d173cfb0-2cc7-4c1b-b0b7-8131560b37c4] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.007152518s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-863481 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-863481 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-863481 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-863481 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [aae9a5de-46b3-4864-b1ab-162170a224b1] Pending
helpers_test.go:344: "sp-pod" [aae9a5de-46b3-4864-b1ab-162170a224b1] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [aae9a5de-46b3-4864-b1ab-162170a224b1] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.004266626s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-863481 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-863481 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-863481 delete -f testdata/storage-provisioner/pod.yaml: (1.35558394s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-863481 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [f9679557-a0ec-4dd8-98eb-b9a7c89b2ddd] Pending
helpers_test.go:344: "sp-pod" [f9679557-a0ec-4dd8-98eb-b9a7c89b2ddd] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [f9679557-a0ec-4dd8-98eb-b9a7c89b2ddd] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003372586s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-863481 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (28.34s)

TestFunctional/parallel/SSHCmd (0.7s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.70s)

TestFunctional/parallel/CpCmd (2.43s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 ssh -n functional-863481 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 cp functional-863481:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3847949940/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 ssh -n functional-863481 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 ssh -n functional-863481 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.43s)

TestFunctional/parallel/FileSync (0.37s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/720192/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 ssh "sudo cat /etc/test/nested/copy/720192/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.37s)

TestFunctional/parallel/CertSync (2.12s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/720192.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 ssh "sudo cat /etc/ssl/certs/720192.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/720192.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 ssh "sudo cat /usr/share/ca-certificates/720192.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 ssh "sudo cat /etc/ssl/certs/51391683.0"
E0923 13:28:41.057141  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/7201922.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 ssh "sudo cat /etc/ssl/certs/7201922.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/7201922.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 ssh "sudo cat /usr/share/ca-certificates/7201922.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.12s)

TestFunctional/parallel/NodeLabels (0.11s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-863481 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.33s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-863481 ssh "sudo systemctl is-active crio": exit status 1 (328.671952ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.33s)

TestFunctional/parallel/License (0.29s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.29s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.6s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-863481 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-863481 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-863481 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 757548: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-863481 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.60s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-863481 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.48s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-863481 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [1e2a9c42-7edf-42e2-8409-158a2d629e21] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [1e2a9c42-7edf-42e2-8409-158a2d629e21] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003932287s
I0923 13:28:00.632731  720192 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.48s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.13s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-863481 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.13s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.111.235.38 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-863481 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.26s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-863481 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-863481 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-lsbdf" [3c2d088c-19ef-4ca7-ad97-1a05bc767524] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-lsbdf" [3c2d088c-19ef-4ca7-ad97-1a05bc767524] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.004160183s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.26s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

TestFunctional/parallel/ProfileCmd/profile_list (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "334.417777ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "68.296396ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.40s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.52s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "469.975249ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "50.503983ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.52s)

TestFunctional/parallel/ServiceCmd/List (0.61s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.61s)

TestFunctional/parallel/MountCmd/any-port (7.64s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-863481 /tmp/TestFunctionalparallelMountCmdany-port160892982/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1727098101444058672" to /tmp/TestFunctionalparallelMountCmdany-port160892982/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1727098101444058672" to /tmp/TestFunctionalparallelMountCmdany-port160892982/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1727098101444058672" to /tmp/TestFunctionalparallelMountCmdany-port160892982/001/test-1727098101444058672
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 23 13:28 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 23 13:28 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 23 13:28 test-1727098101444058672
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 ssh cat /mount-9p/test-1727098101444058672
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-863481 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [7439ad44-aec2-4c01-9ea9-0c8da0df3b3f] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [7439ad44-aec2-4c01-9ea9-0c8da0df3b3f] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [7439ad44-aec2-4c01-9ea9-0c8da0df3b3f] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.004605877s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-863481 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-863481 /tmp/TestFunctionalparallelMountCmdany-port160892982/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.64s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.68s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 service list -o json
functional_test.go:1494: Took "675.056394ms" to run "out/minikube-linux-arm64 -p functional-863481 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.68s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.5s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:30304
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.50s)

TestFunctional/parallel/ServiceCmd/Format (0.46s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.46s)

TestFunctional/parallel/ServiceCmd/URL (0.44s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:30304
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.44s)

TestFunctional/parallel/MountCmd/specific-port (2.26s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-863481 /tmp/TestFunctionalparallelMountCmdspecific-port2992970393/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-863481 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (450.48666ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0923 13:28:29.537992  720192 retry.go:31] will retry after 603.692451ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-863481 /tmp/TestFunctionalparallelMountCmdspecific-port2992970393/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-863481 ssh "sudo umount -f /mount-9p": exit status 1 (315.054135ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-863481 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-863481 /tmp/TestFunctionalparallelMountCmdspecific-port2992970393/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.26s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.51s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-863481 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3293250487/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-863481 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3293250487/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-863481 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3293250487/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-863481 ssh "findmnt -T" /mount1: exit status 1 (871.448619ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0923 13:28:32.220865  720192 retry.go:31] will retry after 586.146085ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-863481 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-863481 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3293250487/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-863481 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3293250487/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-863481 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3293250487/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.51s)

                                                
                                    
TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (1.08s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-863481 version -o=json --components: (1.083480391s)
--- PASS: TestFunctional/parallel/Version/components (1.08s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-863481 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-863481
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-863481
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-863481 image ls --format short --alsologtostderr:
I0923 13:28:42.195521  763514 out.go:345] Setting OutFile to fd 1 ...
I0923 13:28:42.195779  763514 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 13:28:42.195815  763514 out.go:358] Setting ErrFile to fd 2...
I0923 13:28:42.195849  763514 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 13:28:42.196202  763514 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-714802/.minikube/bin
I0923 13:28:42.197222  763514 config.go:182] Loaded profile config "functional-863481": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 13:28:42.197412  763514 config.go:182] Loaded profile config "functional-863481": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 13:28:42.198133  763514 cli_runner.go:164] Run: docker container inspect functional-863481 --format={{.State.Status}}
I0923 13:28:42.247846  763514 ssh_runner.go:195] Run: systemctl --version
I0923 13:28:42.247915  763514 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-863481
I0923 13:28:42.275077  763514 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33538 SSHKeyPath:/home/jenkins/minikube-integration/19690-714802/.minikube/machines/functional-863481/id_rsa Username:docker}
I0923 13:28:42.390087  763514 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.32s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-863481 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/nginx                     | latest            | 195245f0c7927 | 193MB  |
| docker.io/library/nginx                     | alpine            | b887aca7aed61 | 47MB   |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 7f8aa378bb47d | 66MB   |
| registry.k8s.io/etcd                        | 3.5.15-0          | 27e3830e14027 | 139MB  |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/kube-apiserver              | v1.31.1           | d3f53a98c0a9d | 91.6MB |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 24a140c548c07 | 94.7MB |
| registry.k8s.io/coredns/coredns             | v1.11.3           | 2f6c962e7b831 | 60.2MB |
| docker.io/kicbase/echo-server               | functional-863481 | ce2d2cda2d858 | 4.78MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 279f381cb3736 | 85.9MB |
| registry.k8s.io/pause                       | 3.10              | afb61768ce381 | 514kB  |
| docker.io/library/minikube-local-cache-test | functional-863481 | f9bb6c966528d | 30B    |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-863481 image ls --format table --alsologtostderr:
I0923 13:28:43.018922  763755 out.go:345] Setting OutFile to fd 1 ...
I0923 13:28:43.019134  763755 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 13:28:43.019146  763755 out.go:358] Setting ErrFile to fd 2...
I0923 13:28:43.019152  763755 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 13:28:43.019385  763755 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-714802/.minikube/bin
I0923 13:28:43.020039  763755 config.go:182] Loaded profile config "functional-863481": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 13:28:43.020162  763755 config.go:182] Loaded profile config "functional-863481": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 13:28:43.020627  763755 cli_runner.go:164] Run: docker container inspect functional-863481 --format={{.State.Status}}
I0923 13:28:43.040122  763755 ssh_runner.go:195] Run: systemctl --version
I0923 13:28:43.040182  763755 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-863481
I0923 13:28:43.063170  763755 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33538 SSHKeyPath:/home/jenkins/minikube-integration/19690-714802/.minikube/machines/functional-863481/id_rsa Username:docker}
I0923 13:28:43.158051  763755 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-863481 image ls --format json --alsologtostderr:
[{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},
{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},
{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},
{"id":"7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"66000000"},
{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139000000"},
{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-863481"],"size":"4780000"},
{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},
{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"60200000"},
{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},
{"id":"f9bb6c966528d6ebf452cf115216a766f255335b0d324cd87054d60698983780","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-863481"],"size":"30"},
{"id":"d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"91600000"},
{"id":"279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"85900000"},
{"id":"195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},
{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},
{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},
{"id":"24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"94700000"},
{"id":"b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"47000000"},
{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"},
{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-863481 image ls --format json --alsologtostderr:
I0923 13:28:42.781326  763685 out.go:345] Setting OutFile to fd 1 ...
I0923 13:28:42.781446  763685 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 13:28:42.781456  763685 out.go:358] Setting ErrFile to fd 2...
I0923 13:28:42.781463  763685 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 13:28:42.781763  763685 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-714802/.minikube/bin
I0923 13:28:42.782420  763685 config.go:182] Loaded profile config "functional-863481": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 13:28:42.782582  763685 config.go:182] Loaded profile config "functional-863481": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 13:28:42.783091  763685 cli_runner.go:164] Run: docker container inspect functional-863481 --format={{.State.Status}}
I0923 13:28:42.800749  763685 ssh_runner.go:195] Run: systemctl --version
I0923 13:28:42.800820  763685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-863481
I0923 13:28:42.817837  763685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33538 SSHKeyPath:/home/jenkins/minikube-integration/19690-714802/.minikube/machines/functional-863481/id_rsa Username:docker}
I0923 13:28:42.914538  763685 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-863481 image ls --format yaml --alsologtostderr:
- id: d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "91600000"
- id: 24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "94700000"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "514000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-863481
size: "4780000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: f9bb6c966528d6ebf452cf115216a766f255335b0d324cd87054d60698983780
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-863481
size: "30"
- id: 279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "85900000"
- id: 7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "66000000"
- id: b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "47000000"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "60200000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139000000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-863481 image ls --format yaml --alsologtostderr:
I0923 13:28:42.519562  763613 out.go:345] Setting OutFile to fd 1 ...
I0923 13:28:42.519793  763613 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 13:28:42.519821  763613 out.go:358] Setting ErrFile to fd 2...
I0923 13:28:42.519842  763613 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 13:28:42.520105  763613 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-714802/.minikube/bin
I0923 13:28:42.520810  763613 config.go:182] Loaded profile config "functional-863481": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 13:28:42.523685  763613 config.go:182] Loaded profile config "functional-863481": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 13:28:42.525249  763613 cli_runner.go:164] Run: docker container inspect functional-863481 --format={{.State.Status}}
I0923 13:28:42.551021  763613 ssh_runner.go:195] Run: systemctl --version
I0923 13:28:42.551078  763613 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-863481
I0923 13:28:42.571422  763613 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33538 SSHKeyPath:/home/jenkins/minikube-integration/19690-714802/.minikube/machines/functional-863481/id_rsa Username:docker}
I0923 13:28:42.662922  763613 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.68s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild


=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-863481 ssh pgrep buildkitd: exit status 1 (351.345694ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 image build -t localhost/my-image:functional-863481 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-863481 image build -t localhost/my-image:functional-863481 testdata/build --alsologtostderr: (3.121390295s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-863481 image build -t localhost/my-image:functional-863481 testdata/build --alsologtostderr:
I0923 13:28:42.683033  763662 out.go:345] Setting OutFile to fd 1 ...
I0923 13:28:42.684633  763662 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 13:28:42.684658  763662 out.go:358] Setting ErrFile to fd 2...
I0923 13:28:42.684665  763662 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 13:28:42.685018  763662 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-714802/.minikube/bin
I0923 13:28:42.685760  763662 config.go:182] Loaded profile config "functional-863481": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 13:28:42.686522  763662 config.go:182] Loaded profile config "functional-863481": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 13:28:42.687035  763662 cli_runner.go:164] Run: docker container inspect functional-863481 --format={{.State.Status}}
I0923 13:28:42.717450  763662 ssh_runner.go:195] Run: systemctl --version
I0923 13:28:42.717500  763662 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-863481
I0923 13:28:42.747274  763662 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33538 SSHKeyPath:/home/jenkins/minikube-integration/19690-714802/.minikube/machines/functional-863481/id_rsa Username:docker}
I0923 13:28:42.842148  763662 build_images.go:161] Building image from path: /tmp/build.796914809.tar
I0923 13:28:42.842212  763662 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0923 13:28:42.857582  763662 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.796914809.tar
I0923 13:28:42.861193  763662 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.796914809.tar: stat -c "%s %y" /var/lib/minikube/build/build.796914809.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.796914809.tar': No such file or directory
I0923 13:28:42.861220  763662 ssh_runner.go:362] scp /tmp/build.796914809.tar --> /var/lib/minikube/build/build.796914809.tar (3072 bytes)
I0923 13:28:42.889530  763662 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.796914809
I0923 13:28:42.899292  763662 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.796914809 -xf /var/lib/minikube/build/build.796914809.tar
I0923 13:28:42.909318  763662 docker.go:360] Building image: /var/lib/minikube/build/build.796914809
I0923 13:28:42.909397  763662 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-863481 /var/lib/minikube/build/build.796914809
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.5s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.6s

#6 [2/3] RUN true
#6 DONE 0.3s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:32403222e59fbafe8f12ae48d2cd1e7bc5779bfd90ff4fd23645c8cc32cbaff8 done
#8 naming to localhost/my-image:functional-863481 done
#8 DONE 0.1s
I0923 13:28:45.717624  763662 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-863481 /var/lib/minikube/build/build.796914809: (2.808198859s)
I0923 13:28:45.717690  763662 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.796914809
I0923 13:28:45.727249  763662 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.796914809.tar
I0923 13:28:45.736431  763662 build_images.go:217] Built localhost/my-image:functional-863481 from /tmp/build.796914809.tar
I0923 13:28:45.736463  763662 build_images.go:133] succeeded building to: functional-863481
I0923 13:28:45.736469  763662 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.68s)

TestFunctional/parallel/ImageCommands/Setup (0.77s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-863481
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.77s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.05s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 image load --daemon kicbase/echo-server:functional-863481 --alsologtostderr
E0923 13:28:35.920784  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:28:35.927345  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:28:35.938677  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:28:35.960368  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:28:36.004964  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:28:36.086847  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 image ls
E0923 13:28:36.248877  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.05s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.78s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 image load --daemon kicbase/echo-server:functional-863481 --alsologtostderr
E0923 13:28:36.570212  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.78s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
E0923 13:28:37.213124  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-863481
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 image load --daemon kicbase/echo-server:functional-863481 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 image ls
2024/09/23 13:28:38 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.07s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 image save kicbase/echo-server:functional-863481 /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.40s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes


=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster


=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters


=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

TestFunctional/parallel/DockerEnv/bash (1.3s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-863481 docker-env) && out/minikube-linux-arm64 status -p functional-863481"
E0923 13:28:38.494995  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-863481 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.30s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 image rm kicbase/echo-server:functional-863481 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.89s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.89s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-863481
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-863481 image save --daemon kicbase/echo-server:functional-863481 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-863481
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.50s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-863481
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-863481
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-863481
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (127.42s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-524936 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0923 13:28:56.421063  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:29:16.902398  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:29:57.863782  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-524936 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (2m6.545459334s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-524936 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (127.42s)

TestMultiControlPlane/serial/DeployApp (8.84s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-524936 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-524936 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-524936 -- rollout status deployment/busybox: (5.704576581s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-524936 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-524936 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-524936 -- exec busybox-7dff88458-db7sn -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-524936 -- exec busybox-7dff88458-mm7pl -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-524936 -- exec busybox-7dff88458-twvl6 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-524936 -- exec busybox-7dff88458-db7sn -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-524936 -- exec busybox-7dff88458-mm7pl -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-524936 -- exec busybox-7dff88458-twvl6 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-524936 -- exec busybox-7dff88458-db7sn -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-524936 -- exec busybox-7dff88458-mm7pl -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-524936 -- exec busybox-7dff88458-twvl6 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.84s)

TestMultiControlPlane/serial/PingHostFromPods (1.68s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-524936 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-524936 -- exec busybox-7dff88458-db7sn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-524936 -- exec busybox-7dff88458-db7sn -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-524936 -- exec busybox-7dff88458-mm7pl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-524936 -- exec busybox-7dff88458-mm7pl -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-524936 -- exec busybox-7dff88458-twvl6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-524936 -- exec busybox-7dff88458-twvl6 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.68s)

TestMultiControlPlane/serial/AddWorkerNode (26.44s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-524936 -v=7 --alsologtostderr
E0923 13:31:19.785170  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-524936 -v=7 --alsologtostderr: (25.414856083s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-524936 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-524936 status -v=7 --alsologtostderr: (1.028221522s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (26.44s)

TestMultiControlPlane/serial/NodeLabels (0.12s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-524936 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.01s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.009161584s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.01s)

TestMultiControlPlane/serial/CopyFile (19.85s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-524936 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-linux-arm64 -p ha-524936 status --output json -v=7 --alsologtostderr: (1.006060419s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-524936 cp testdata/cp-test.txt ha-524936:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-524936 ssh -n ha-524936 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-524936 cp ha-524936:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile190734983/001/cp-test_ha-524936.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-524936 ssh -n ha-524936 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-524936 cp ha-524936:/home/docker/cp-test.txt ha-524936-m02:/home/docker/cp-test_ha-524936_ha-524936-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-524936 ssh -n ha-524936 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-524936 ssh -n ha-524936-m02 "sudo cat /home/docker/cp-test_ha-524936_ha-524936-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-524936 cp ha-524936:/home/docker/cp-test.txt ha-524936-m03:/home/docker/cp-test_ha-524936_ha-524936-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-524936 ssh -n ha-524936 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-524936 ssh -n ha-524936-m03 "sudo cat /home/docker/cp-test_ha-524936_ha-524936-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-524936 cp ha-524936:/home/docker/cp-test.txt ha-524936-m04:/home/docker/cp-test_ha-524936_ha-524936-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-524936 ssh -n ha-524936 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-524936 ssh -n ha-524936-m04 "sudo cat /home/docker/cp-test_ha-524936_ha-524936-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-524936 cp testdata/cp-test.txt ha-524936-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-524936 ssh -n ha-524936-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-524936 cp ha-524936-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile190734983/001/cp-test_ha-524936-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-524936 ssh -n ha-524936-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-524936 cp ha-524936-m02:/home/docker/cp-test.txt ha-524936:/home/docker/cp-test_ha-524936-m02_ha-524936.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-524936 ssh -n ha-524936-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-524936 ssh -n ha-524936 "sudo cat /home/docker/cp-test_ha-524936-m02_ha-524936.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-524936 cp ha-524936-m02:/home/docker/cp-test.txt ha-524936-m03:/home/docker/cp-test_ha-524936-m02_ha-524936-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-524936 ssh -n ha-524936-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-524936 ssh -n ha-524936-m03 "sudo cat /home/docker/cp-test_ha-524936-m02_ha-524936-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-524936 cp ha-524936-m02:/home/docker/cp-test.txt ha-524936-m04:/home/docker/cp-test_ha-524936-m02_ha-524936-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-524936 ssh -n ha-524936-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-524936 ssh -n ha-524936-m04 "sudo cat /home/docker/cp-test_ha-524936-m02_ha-524936-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-524936 cp testdata/cp-test.txt ha-524936-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-524936 ssh -n ha-524936-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-524936 cp ha-524936-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile190734983/001/cp-test_ha-524936-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-524936 ssh -n ha-524936-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-524936 cp ha-524936-m03:/home/docker/cp-test.txt ha-524936:/home/docker/cp-test_ha-524936-m03_ha-524936.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-524936 ssh -n ha-524936-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-524936 ssh -n ha-524936 "sudo cat /home/docker/cp-test_ha-524936-m03_ha-524936.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-524936 cp ha-524936-m03:/home/docker/cp-test.txt ha-524936-m02:/home/docker/cp-test_ha-524936-m03_ha-524936-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-524936 ssh -n ha-524936-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-524936 ssh -n ha-524936-m02 "sudo cat /home/docker/cp-test_ha-524936-m03_ha-524936-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-524936 cp ha-524936-m03:/home/docker/cp-test.txt ha-524936-m04:/home/docker/cp-test_ha-524936-m03_ha-524936-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-524936 ssh -n ha-524936-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-524936 ssh -n ha-524936-m04 "sudo cat /home/docker/cp-test_ha-524936-m03_ha-524936-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-524936 cp testdata/cp-test.txt ha-524936-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-524936 ssh -n ha-524936-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-524936 cp ha-524936-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile190734983/001/cp-test_ha-524936-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-524936 ssh -n ha-524936-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-524936 cp ha-524936-m04:/home/docker/cp-test.txt ha-524936:/home/docker/cp-test_ha-524936-m04_ha-524936.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-524936 ssh -n ha-524936-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-524936 ssh -n ha-524936 "sudo cat /home/docker/cp-test_ha-524936-m04_ha-524936.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-524936 cp ha-524936-m04:/home/docker/cp-test.txt ha-524936-m02:/home/docker/cp-test_ha-524936-m04_ha-524936-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-524936 ssh -n ha-524936-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-524936 ssh -n ha-524936-m02 "sudo cat /home/docker/cp-test_ha-524936-m04_ha-524936-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-524936 cp ha-524936-m04:/home/docker/cp-test.txt ha-524936-m03:/home/docker/cp-test_ha-524936-m04_ha-524936-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-524936 ssh -n ha-524936-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-524936 ssh -n ha-524936-m03 "sudo cat /home/docker/cp-test_ha-524936-m04_ha-524936-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.85s)
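The CopyFile log above follows a fixed pattern: copy `testdata/cp-test.txt` to each node, copy it back to the host, then fan it out to every other node under a `cp-test_<src>_<dst>.txt` name, verifying each hop with `ssh -n ... sudo cat`. A minimal Python sketch of that copy matrix, with node names taken from the log (the helper below is illustrative, not part of the minikube test code):

```python
from itertools import permutations

def copy_matrix(nodes):
    """Enumerate the cross-node copy targets exercised by the CopyFile test.

    For every ordered (src, dst) pair of distinct nodes, the test copies
    src:/home/docker/cp-test.txt to dst as cp-test_<src>_<dst>.txt.
    """
    return {
        (src, dst): f"/home/docker/cp-test_{src}_{dst}.txt"
        for src, dst in permutations(nodes, 2)
    }

nodes = ["ha-524936", "ha-524936-m02", "ha-524936-m03", "ha-524936-m04"]
matrix = copy_matrix(nodes)
# 4 nodes -> 12 ordered pairs, matching the cross-node copies in the log.
print(len(matrix))
print(matrix[("ha-524936-m02", "ha-524936")])
```

The second print matches the destination path seen in the `cp ha-524936-m02:... ha-524936:...` step above.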

TestMultiControlPlane/serial/StopSecondaryNode (11.75s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-524936 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-524936 node stop m02 -v=7 --alsologtostderr: (11.007419011s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-524936 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-524936 status -v=7 --alsologtostderr: exit status 7 (741.139017ms)

-- stdout --
	ha-524936
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-524936-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-524936-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-524936-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0923 13:32:05.084264  785939 out.go:345] Setting OutFile to fd 1 ...
	I0923 13:32:05.084411  785939 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:32:05.084421  785939 out.go:358] Setting ErrFile to fd 2...
	I0923 13:32:05.084426  785939 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:32:05.084739  785939 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-714802/.minikube/bin
	I0923 13:32:05.085035  785939 out.go:352] Setting JSON to false
	I0923 13:32:05.085082  785939 mustload.go:65] Loading cluster: ha-524936
	I0923 13:32:05.085201  785939 notify.go:220] Checking for updates...
	I0923 13:32:05.085612  785939 config.go:182] Loaded profile config "ha-524936": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 13:32:05.085632  785939 status.go:174] checking status of ha-524936 ...
	I0923 13:32:05.086656  785939 cli_runner.go:164] Run: docker container inspect ha-524936 --format={{.State.Status}}
	I0923 13:32:05.108173  785939 status.go:364] ha-524936 host status = "Running" (err=<nil>)
	I0923 13:32:05.108201  785939 host.go:66] Checking if "ha-524936" exists ...
	I0923 13:32:05.108505  785939 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-524936
	I0923 13:32:05.141639  785939 host.go:66] Checking if "ha-524936" exists ...
	I0923 13:32:05.142053  785939 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 13:32:05.142134  785939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-524936
	I0923 13:32:05.164098  785939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33543 SSHKeyPath:/home/jenkins/minikube-integration/19690-714802/.minikube/machines/ha-524936/id_rsa Username:docker}
	I0923 13:32:05.258520  785939 ssh_runner.go:195] Run: systemctl --version
	I0923 13:32:05.262851  785939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 13:32:05.275383  785939 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 13:32:05.325181  785939 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:true NGoroutines:71 SystemTime:2024-09-23 13:32:05.314129617 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 13:32:05.325810  785939 kubeconfig.go:125] found "ha-524936" server: "https://192.168.49.254:8443"
	I0923 13:32:05.325873  785939 api_server.go:166] Checking apiserver status ...
	I0923 13:32:05.325943  785939 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 13:32:05.337933  785939 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2357/cgroup
	I0923 13:32:05.348207  785939 api_server.go:182] apiserver freezer: "12:freezer:/docker/8505ad00c1f0021f243e088cc1d8fea86ebe89706fb9179b83136a77de92ed52/kubepods/burstable/poda58a7de2d51182e262cf74d524229514/bf4fb625f0addd3a60d4b91a57f77c4f3673ba8be9f3827cff13ed2bc3428004"
	I0923 13:32:05.348277  785939 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/8505ad00c1f0021f243e088cc1d8fea86ebe89706fb9179b83136a77de92ed52/kubepods/burstable/poda58a7de2d51182e262cf74d524229514/bf4fb625f0addd3a60d4b91a57f77c4f3673ba8be9f3827cff13ed2bc3428004/freezer.state
	I0923 13:32:05.358188  785939 api_server.go:204] freezer state: "THAWED"
	I0923 13:32:05.358223  785939 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0923 13:32:05.367866  785939 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0923 13:32:05.367896  785939 status.go:456] ha-524936 apiserver status = Running (err=<nil>)
	I0923 13:32:05.367930  785939 status.go:176] ha-524936 status: &{Name:ha-524936 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 13:32:05.367948  785939 status.go:174] checking status of ha-524936-m02 ...
	I0923 13:32:05.368284  785939 cli_runner.go:164] Run: docker container inspect ha-524936-m02 --format={{.State.Status}}
	I0923 13:32:05.387331  785939 status.go:364] ha-524936-m02 host status = "Stopped" (err=<nil>)
	I0923 13:32:05.387357  785939 status.go:377] host is not running, skipping remaining checks
	I0923 13:32:05.387365  785939 status.go:176] ha-524936-m02 status: &{Name:ha-524936-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 13:32:05.387391  785939 status.go:174] checking status of ha-524936-m03 ...
	I0923 13:32:05.387718  785939 cli_runner.go:164] Run: docker container inspect ha-524936-m03 --format={{.State.Status}}
	I0923 13:32:05.405505  785939 status.go:364] ha-524936-m03 host status = "Running" (err=<nil>)
	I0923 13:32:05.405532  785939 host.go:66] Checking if "ha-524936-m03" exists ...
	I0923 13:32:05.405853  785939 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-524936-m03
	I0923 13:32:05.427168  785939 host.go:66] Checking if "ha-524936-m03" exists ...
	I0923 13:32:05.427506  785939 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 13:32:05.427549  785939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-524936-m03
	I0923 13:32:05.447144  785939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33553 SSHKeyPath:/home/jenkins/minikube-integration/19690-714802/.minikube/machines/ha-524936-m03/id_rsa Username:docker}
	I0923 13:32:05.543056  785939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 13:32:05.556470  785939 kubeconfig.go:125] found "ha-524936" server: "https://192.168.49.254:8443"
	I0923 13:32:05.556502  785939 api_server.go:166] Checking apiserver status ...
	I0923 13:32:05.556544  785939 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 13:32:05.568523  785939 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2156/cgroup
	I0923 13:32:05.581779  785939 api_server.go:182] apiserver freezer: "12:freezer:/docker/a18c17a0f2583a6bd676922131b3cb63fe2ecd5ff8a28b276719931349679f9b/kubepods/burstable/podae6e9d921ded2026890353b8d2d8b0d3/8e85c49acd036e00a81653cb8ff1ab988fd676e466bd1d289440a229a348d4f0"
	I0923 13:32:05.581872  785939 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/a18c17a0f2583a6bd676922131b3cb63fe2ecd5ff8a28b276719931349679f9b/kubepods/burstable/podae6e9d921ded2026890353b8d2d8b0d3/8e85c49acd036e00a81653cb8ff1ab988fd676e466bd1d289440a229a348d4f0/freezer.state
	I0923 13:32:05.591328  785939 api_server.go:204] freezer state: "THAWED"
	I0923 13:32:05.591360  785939 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0923 13:32:05.599665  785939 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0923 13:32:05.599701  785939 status.go:456] ha-524936-m03 apiserver status = Running (err=<nil>)
	I0923 13:32:05.599716  785939 status.go:176] ha-524936-m03 status: &{Name:ha-524936-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 13:32:05.599745  785939 status.go:174] checking status of ha-524936-m04 ...
	I0923 13:32:05.600102  785939 cli_runner.go:164] Run: docker container inspect ha-524936-m04 --format={{.State.Status}}
	I0923 13:32:05.622018  785939 status.go:364] ha-524936-m04 host status = "Running" (err=<nil>)
	I0923 13:32:05.622059  785939 host.go:66] Checking if "ha-524936-m04" exists ...
	I0923 13:32:05.622760  785939 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-524936-m04
	I0923 13:32:05.642296  785939 host.go:66] Checking if "ha-524936-m04" exists ...
	I0923 13:32:05.642616  785939 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 13:32:05.642663  785939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-524936-m04
	I0923 13:32:05.660086  785939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/19690-714802/.minikube/machines/ha-524936-m04/id_rsa Username:docker}
	I0923 13:32:05.754138  785939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 13:32:05.766330  785939 status.go:176] ha-524936-m04 status: &{Name:ha-524936-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.75s)
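The `Non-zero exit ... exit status 7` above is expected, not a failure: in this run `minikube status` returns 7 when any node's host is stopped. A small sketch reproducing that convention against the status stdout captured above (the exit-code value is taken from this log; the parser is naive and tied to the text layout shown, not to minikube's real implementation):

```python
def status_exit_code(stdout: str) -> int:
    """Return 7 if any node in the status output reports a stopped
    host, else 0, mirroring the exit code observed in the log."""
    stopped = any(
        line.strip() == "host: Stopped" for line in stdout.splitlines()
    )
    return 7 if stopped else 0

# Abbreviated version of the -- stdout -- block above.
sample = """ha-524936
type: Control Plane
host: Running

ha-524936-m02
type: Control Plane
host: Stopped
"""
print(status_exit_code(sample))  # 7, matching 'exit status 7' in the log
```

The same convention explains the later `exit status 7` under StopCluster, where all three remaining nodes report `host: Stopped`.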

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.8s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.80s)

TestMultiControlPlane/serial/RestartSecondaryNode (49.85s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-524936 node start m02 -v=7 --alsologtostderr
E0923 13:32:51.158613  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/functional-863481/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:32:51.165162  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/functional-863481/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:32:51.176643  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/functional-863481/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:32:51.198077  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/functional-863481/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:32:51.239584  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/functional-863481/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:32:51.321061  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/functional-863481/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:32:51.482685  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/functional-863481/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:32:51.804415  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/functional-863481/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:32:52.445976  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/functional-863481/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:32:53.728134  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/functional-863481/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-524936 node start m02 -v=7 --alsologtostderr: (48.522427318s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-524936 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-524936 status -v=7 --alsologtostderr: (1.196410741s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
E0923 13:32:56.290281  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/functional-863481/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (49.85s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.03s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.025424059s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.03s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (252.95s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-524936 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-524936 -v=7 --alsologtostderr
E0923 13:33:01.412019  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/functional-863481/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:33:11.653585  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/functional-863481/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-524936 -v=7 --alsologtostderr: (34.303737772s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-524936 --wait=true -v=7 --alsologtostderr
E0923 13:33:32.135829  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/functional-863481/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:33:35.917409  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:34:03.626908  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:34:13.097144  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/functional-863481/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:35:35.018448  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/functional-863481/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-524936 --wait=true -v=7 --alsologtostderr: (3m38.486142939s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-524936
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (252.95s)

TestMultiControlPlane/serial/DeleteSecondaryNode (11.2s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-524936 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-524936 node delete m03 -v=7 --alsologtostderr: (10.203251994s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-524936 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.20s)
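The final `kubectl get nodes -o go-template` call above extracts the `Ready` condition status for each node. The same extraction can be sketched in Python against the JSON form of the node list (the condition field names are standard Kubernetes NodeStatus; the sample object below is made up for illustration):

```python
def ready_statuses(node_list: dict) -> list:
    """Pull the 'Ready' condition status for each node, as the
    go-template in the test does."""
    out = []
    for item in node_list.get("items", []):
        for cond in item.get("status", {}).get("conditions", []):
            if cond.get("type") == "Ready":
                out.append(cond["status"])
    return out

# Shaped like `kubectl get nodes -o json` output, heavily abbreviated.
sample = {
    "items": [
        {"status": {"conditions": [
            {"type": "MemoryPressure", "status": "False"},
            {"type": "Ready", "status": "True"},
        ]}},
        {"status": {"conditions": [{"type": "Ready", "status": "True"}]}},
    ]
}
print(ready_statuses(sample))  # ['True', 'True']
```

After deleting m03, the test expects one `True` line per remaining node, which is what the go-template prints.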

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.75s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.75s)

TestMultiControlPlane/serial/StopCluster (33.14s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-524936 stop -v=7 --alsologtostderr
E0923 13:37:51.159820  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/functional-863481/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-524936 stop -v=7 --alsologtostderr: (33.015605496s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-524936 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-524936 status -v=7 --alsologtostderr: exit status 7 (127.318998ms)

-- stdout --
	ha-524936
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-524936-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-524936-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0923 13:37:55.404673  813899 out.go:345] Setting OutFile to fd 1 ...
	I0923 13:37:55.404808  813899 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:37:55.404816  813899 out.go:358] Setting ErrFile to fd 2...
	I0923 13:37:55.404821  813899 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:37:55.405102  813899 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-714802/.minikube/bin
	I0923 13:37:55.405291  813899 out.go:352] Setting JSON to false
	I0923 13:37:55.405332  813899 mustload.go:65] Loading cluster: ha-524936
	I0923 13:37:55.405449  813899 notify.go:220] Checking for updates...
	I0923 13:37:55.405826  813899 config.go:182] Loaded profile config "ha-524936": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 13:37:55.405843  813899 status.go:174] checking status of ha-524936 ...
	I0923 13:37:55.406721  813899 cli_runner.go:164] Run: docker container inspect ha-524936 --format={{.State.Status}}
	I0923 13:37:55.427094  813899 status.go:364] ha-524936 host status = "Stopped" (err=<nil>)
	I0923 13:37:55.427119  813899 status.go:377] host is not running, skipping remaining checks
	I0923 13:37:55.427126  813899 status.go:176] ha-524936 status: &{Name:ha-524936 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 13:37:55.427159  813899 status.go:174] checking status of ha-524936-m02 ...
	I0923 13:37:55.427478  813899 cli_runner.go:164] Run: docker container inspect ha-524936-m02 --format={{.State.Status}}
	I0923 13:37:55.458745  813899 status.go:364] ha-524936-m02 host status = "Stopped" (err=<nil>)
	I0923 13:37:55.458767  813899 status.go:377] host is not running, skipping remaining checks
	I0923 13:37:55.458774  813899 status.go:176] ha-524936-m02 status: &{Name:ha-524936-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 13:37:55.458793  813899 status.go:174] checking status of ha-524936-m04 ...
	I0923 13:37:55.459098  813899 cli_runner.go:164] Run: docker container inspect ha-524936-m04 --format={{.State.Status}}
	I0923 13:37:55.476708  813899 status.go:364] ha-524936-m04 host status = "Stopped" (err=<nil>)
	I0923 13:37:55.476730  813899 status.go:377] host is not running, skipping remaining checks
	I0923 13:37:55.476737  813899 status.go:176] ha-524936-m04 status: &{Name:ha-524936-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (33.14s)

TestMultiControlPlane/serial/RestartCluster (98.58s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-524936 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0923 13:38:18.860034  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/functional-863481/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:38:35.917880  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-524936 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (1m37.578613392s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-524936 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (98.58s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.79s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.79s)

TestMultiControlPlane/serial/AddSecondaryNode (46.91s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-524936 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-524936 --control-plane -v=7 --alsologtostderr: (45.907862696s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-524936 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-524936 status -v=7 --alsologtostderr: (1.002759361s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (46.91s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.02s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.015097403s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.02s)

TestImageBuild/serial/Setup (34.7s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -p image-698559 --driver=docker  --container-runtime=docker
image_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -p image-698559 --driver=docker  --container-runtime=docker: (34.700241498s)
--- PASS: TestImageBuild/serial/Setup (34.70s)

TestImageBuild/serial/NormalBuild (1.81s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-698559
image_test.go:78: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-698559: (1.809355446s)
--- PASS: TestImageBuild/serial/NormalBuild (1.81s)

TestImageBuild/serial/BuildWithBuildArg (1s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-698559
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.00s)

TestImageBuild/serial/BuildWithDockerIgnore (0.86s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-698559
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.86s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.05s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-698559
image_test.go:88: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-698559: (1.05162082s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.05s)

TestJSONOutput/start/Command (39.02s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-364709 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-364709 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (39.020540106s)
--- PASS: TestJSONOutput/start/Command (39.02s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.92s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-364709 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.92s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.53s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-364709 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.53s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.84s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-364709 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-364709 --output=json --user=testUser: (5.838928234s)
--- PASS: TestJSONOutput/stop/Command (5.84s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-259569 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-259569 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (79.285618ms)

-- stdout --
	{"specversion":"1.0","id":"dbb7efa6-6c87-4f7f-b5fe-bfcc2561034d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-259569] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4c6c3830-af0d-4e05-aac3-ad552ab1d943","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19690"}}
	{"specversion":"1.0","id":"8fc7cee2-76a1-49b1-b164-384d1c57f998","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0c2d8c6e-9e85-48cf-8bc6-fc3dd1b30388","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19690-714802/kubeconfig"}}
	{"specversion":"1.0","id":"4f1e5d7b-971f-4116-92b1-43a4418a3f43","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-714802/.minikube"}}
	{"specversion":"1.0","id":"656dd821-c1f2-48de-8073-20a0784b67fe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"944cc99b-e198-4d9a-ba91-ba27997bead6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c84adb9d-f596-4eb0-8166-436b0a411e7e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-259569" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-259569
--- PASS: TestErrorJSONOutput (0.23s)

TestKicCustomNetwork/create_custom_network (31.85s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-248364 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-248364 --network=: (29.719925139s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-248364" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-248364
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-248364: (2.116637469s)
--- PASS: TestKicCustomNetwork/create_custom_network (31.85s)

TestKicCustomNetwork/use_default_bridge_network (33.1s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-072346 --network=bridge
E0923 13:42:51.159872  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/functional-863481/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-072346 --network=bridge: (31.106628707s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-072346" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-072346
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-072346: (1.967720884s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (33.10s)

TestKicExistingNetwork (31.67s)

=== RUN   TestKicExistingNetwork
I0923 13:43:06.092819  720192 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0923 13:43:06.109094  720192 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0923 13:43:06.109169  720192 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0923 13:43:06.109188  720192 cli_runner.go:164] Run: docker network inspect existing-network
W0923 13:43:06.125057  720192 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0923 13:43:06.125093  720192 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I0923 13:43:06.125110  720192 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I0923 13:43:06.125215  720192 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0923 13:43:06.143683  720192 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-f087341914e2 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:c7:4c:d9:6e} reservation:<nil>}
I0923 13:43:06.144162  720192 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40017e74c0}
I0923 13:43:06.144189  720192 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0923 13:43:06.144238  720192 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0923 13:43:06.212830  720192 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-860083 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-860083 --network=existing-network: (29.534197005s)
helpers_test.go:175: Cleaning up "existing-network-860083" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-860083
E0923 13:43:35.917823  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-860083: (1.976681882s)
I0923 13:43:37.741817  720192 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (31.67s)

TestKicCustomSubnet (34.77s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-156749 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-156749 --subnet=192.168.60.0/24: (32.982407966s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-156749 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-156749" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-156749
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-156749: (1.761981537s)
--- PASS: TestKicCustomSubnet (34.77s)

TestKicStaticIP (34.92s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-516300 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-516300 --static-ip=192.168.200.200: (32.559913435s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-516300 ip
helpers_test.go:175: Cleaning up "static-ip-516300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-516300
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-516300: (2.104828006s)
--- PASS: TestKicStaticIP (34.92s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (70.01s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-427672 --driver=docker  --container-runtime=docker
E0923 13:44:58.988264  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-427672 --driver=docker  --container-runtime=docker: (32.366240362s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-430216 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-430216 --driver=docker  --container-runtime=docker: (32.014852309s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-427672
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-430216
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-430216" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-430216
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-430216: (2.134429966s)
helpers_test.go:175: Cleaning up "first-427672" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-427672
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-427672: (2.134848422s)
--- PASS: TestMinikubeProfile (70.01s)

TestMountStart/serial/StartWithMountFirst (7.78s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-841400 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-841400 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (6.776250685s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.78s)

TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-841400 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

TestMountStart/serial/StartWithMountSecond (8.9s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-843599 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-843599 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (7.899427381s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.90s)

TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-843599 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.48s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-841400 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-841400 --alsologtostderr -v=5: (1.482332908s)
--- PASS: TestMountStart/serial/DeleteFirst (1.48s)

TestMountStart/serial/VerifyMountPostDelete (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-843599 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

TestMountStart/serial/Stop (1.21s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-843599
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-843599: (1.213778252s)
--- PASS: TestMountStart/serial/Stop (1.21s)

TestMountStart/serial/RestartStopped (8.09s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-843599
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-843599: (7.088945088s)
--- PASS: TestMountStart/serial/RestartStopped (8.09s)

TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-843599 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

TestMultiNode/serial/FreshStart2Nodes (83.44s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-415381 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-415381 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m22.860683767s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-415381 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (83.44s)

TestMultiNode/serial/DeployApp2Nodes (50.71s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-415381 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
E0923 13:47:51.158286  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/functional-863481/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-415381 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-415381 -- rollout status deployment/busybox: (4.775162794s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-415381 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0923 13:47:56.272396  720192 retry.go:31] will retry after 1.454929471s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-415381 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0923 13:47:57.881073  720192 retry.go:31] will retry after 1.316581496s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-415381 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0923 13:47:59.348330  720192 retry.go:31] will retry after 2.283015442s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-415381 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0923 13:48:01.781548  720192 retry.go:31] will retry after 2.777672695s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-415381 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0923 13:48:04.719820  720192 retry.go:31] will retry after 3.595537937s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-415381 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0923 13:48:08.468423  720192 retry.go:31] will retry after 6.099622959s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-415381 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0923 13:48:14.720173  720192 retry.go:31] will retry after 11.32925115s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-415381 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0923 13:48:26.254370  720192 retry.go:31] will retry after 13.764443361s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
E0923 13:48:35.917483  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-415381 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-415381 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-415381 -- exec busybox-7dff88458-rqrk5 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-415381 -- exec busybox-7dff88458-s4tc7 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-415381 -- exec busybox-7dff88458-rqrk5 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-415381 -- exec busybox-7dff88458-s4tc7 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-415381 -- exec busybox-7dff88458-rqrk5 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-415381 -- exec busybox-7dff88458-s4tc7 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (50.71s)
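The retries above poll `kubectl get pods -o jsonpath='{.items[*].status.podIP}'` until both busybox replicas report an IP; the run initially saw only `10.244.0.3` before the second pod was scheduled. A minimal sketch of the count check those retries perform, using a canned two-IP string (illustrative values) in place of the live jsonpath output:

```shell
# Sketch of the pod-IP readiness check; a real run would populate 'ips' with:
#   kubectl get pods -o jsonpath='{.items[*].status.podIP}'
ips="10.244.0.3 10.244.1.2"    # sample output once both pods have IPs
count=$(echo "$ips" | wc -w)   # jsonpath joins the IPs with spaces
if test "$count" -eq 2; then
  echo "both pod IPs assigned"
else
  echo "expected 2 Pod IPs but got $count (may be temporary)"
fi
```

The test retries with backoff rather than failing immediately because the second pod's IP appears only after the node-2 kubelet starts it.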

TestMultiNode/serial/PingHostFrom2Pods (1.15s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-415381 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-415381 -- exec busybox-7dff88458-rqrk5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-415381 -- exec busybox-7dff88458-rqrk5 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-415381 -- exec busybox-7dff88458-s4tc7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-415381 -- exec busybox-7dff88458-s4tc7 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.15s)
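The `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3` pipeline above isolates the host gateway IP from busybox's nslookup output: line 5 is the "Address" line for the queried name, and field 3 of that line is the IP, which the test then pings. A sketch with canned busybox-style output (the DNS-server lines are illustrative; `192.168.67.1` is the gateway this run pinged):

```shell
# Canned busybox nslookup output stands in for the in-pod lookup.
out='Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.67.1 host.minikube.internal'

# Line 5 is "Address 1: 192.168.67.1 ..."; field 3 is the IP itself.
host_ip=$(echo "$out" | awk 'NR==5' | cut -d" " -f3)
echo "$host_ip"   # → 192.168.67.1, the target of: ping -c 1 "$host_ip"
```

Relying on a fixed line number is brittle if the resolver output format changes, which is why each test run pairs the extraction with an immediate `ping -c 1` to validate the result.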

TestMultiNode/serial/AddNode (18.5s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-415381 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-415381 -v 3 --alsologtostderr: (17.703987647s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-415381 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (18.50s)

TestMultiNode/serial/MultiNodeLabels (0.12s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-415381 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.12s)

TestMultiNode/serial/ProfileList (0.8s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.80s)

TestMultiNode/serial/CopyFile (10.04s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-415381 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-415381 cp testdata/cp-test.txt multinode-415381:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-415381 ssh -n multinode-415381 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-415381 cp multinode-415381:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1528949885/001/cp-test_multinode-415381.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-415381 ssh -n multinode-415381 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-415381 cp multinode-415381:/home/docker/cp-test.txt multinode-415381-m02:/home/docker/cp-test_multinode-415381_multinode-415381-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-415381 ssh -n multinode-415381 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-415381 ssh -n multinode-415381-m02 "sudo cat /home/docker/cp-test_multinode-415381_multinode-415381-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-415381 cp multinode-415381:/home/docker/cp-test.txt multinode-415381-m03:/home/docker/cp-test_multinode-415381_multinode-415381-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-415381 ssh -n multinode-415381 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-415381 ssh -n multinode-415381-m03 "sudo cat /home/docker/cp-test_multinode-415381_multinode-415381-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-415381 cp testdata/cp-test.txt multinode-415381-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-415381 ssh -n multinode-415381-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-415381 cp multinode-415381-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1528949885/001/cp-test_multinode-415381-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-415381 ssh -n multinode-415381-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-415381 cp multinode-415381-m02:/home/docker/cp-test.txt multinode-415381:/home/docker/cp-test_multinode-415381-m02_multinode-415381.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-415381 ssh -n multinode-415381-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-415381 ssh -n multinode-415381 "sudo cat /home/docker/cp-test_multinode-415381-m02_multinode-415381.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-415381 cp multinode-415381-m02:/home/docker/cp-test.txt multinode-415381-m03:/home/docker/cp-test_multinode-415381-m02_multinode-415381-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-415381 ssh -n multinode-415381-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-415381 ssh -n multinode-415381-m03 "sudo cat /home/docker/cp-test_multinode-415381-m02_multinode-415381-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-415381 cp testdata/cp-test.txt multinode-415381-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-415381 ssh -n multinode-415381-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-415381 cp multinode-415381-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1528949885/001/cp-test_multinode-415381-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-415381 ssh -n multinode-415381-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-415381 cp multinode-415381-m03:/home/docker/cp-test.txt multinode-415381:/home/docker/cp-test_multinode-415381-m03_multinode-415381.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-415381 ssh -n multinode-415381-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-415381 ssh -n multinode-415381 "sudo cat /home/docker/cp-test_multinode-415381-m03_multinode-415381.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-415381 cp multinode-415381-m03:/home/docker/cp-test.txt multinode-415381-m02:/home/docker/cp-test_multinode-415381-m03_multinode-415381-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-415381 ssh -n multinode-415381-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-415381 ssh -n multinode-415381-m02 "sudo cat /home/docker/cp-test_multinode-415381-m03_multinode-415381-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.04s)

TestMultiNode/serial/StopNode (2.24s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-415381 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-415381 node stop m03: (1.211354048s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-415381 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-415381 status: exit status 7 (523.554582ms)

-- stdout --
	multinode-415381
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-415381-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-415381-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-415381 status --alsologtostderr
E0923 13:49:14.224716  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/functional-863481/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-415381 status --alsologtostderr: exit status 7 (502.529518ms)

-- stdout --
	multinode-415381
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-415381-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-415381-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0923 13:49:14.196136  888566 out.go:345] Setting OutFile to fd 1 ...
	I0923 13:49:14.196251  888566 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:49:14.196261  888566 out.go:358] Setting ErrFile to fd 2...
	I0923 13:49:14.196267  888566 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:49:14.196498  888566 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-714802/.minikube/bin
	I0923 13:49:14.196707  888566 out.go:352] Setting JSON to false
	I0923 13:49:14.196748  888566 mustload.go:65] Loading cluster: multinode-415381
	I0923 13:49:14.196845  888566 notify.go:220] Checking for updates...
	I0923 13:49:14.197210  888566 config.go:182] Loaded profile config "multinode-415381": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 13:49:14.197230  888566 status.go:174] checking status of multinode-415381 ...
	I0923 13:49:14.198125  888566 cli_runner.go:164] Run: docker container inspect multinode-415381 --format={{.State.Status}}
	I0923 13:49:14.215932  888566 status.go:364] multinode-415381 host status = "Running" (err=<nil>)
	I0923 13:49:14.215958  888566 host.go:66] Checking if "multinode-415381" exists ...
	I0923 13:49:14.216262  888566 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-415381
	I0923 13:49:14.241016  888566 host.go:66] Checking if "multinode-415381" exists ...
	I0923 13:49:14.241332  888566 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 13:49:14.241384  888566 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-415381
	I0923 13:49:14.258924  888566 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33668 SSHKeyPath:/home/jenkins/minikube-integration/19690-714802/.minikube/machines/multinode-415381/id_rsa Username:docker}
	I0923 13:49:14.354564  888566 ssh_runner.go:195] Run: systemctl --version
	I0923 13:49:14.359283  888566 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 13:49:14.371661  888566 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 13:49:14.422727  888566 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-09-23 13:49:14.410854754 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 13:49:14.423342  888566 kubeconfig.go:125] found "multinode-415381" server: "https://192.168.67.2:8443"
	I0923 13:49:14.423386  888566 api_server.go:166] Checking apiserver status ...
	I0923 13:49:14.423426  888566 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 13:49:14.435612  888566 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2168/cgroup
	I0923 13:49:14.445220  888566 api_server.go:182] apiserver freezer: "12:freezer:/docker/c9188d64724cfa065d51579396b805294b8cca1d76724dd8f38c17a40412d46b/kubepods/burstable/pod2f23632f5677d2cb66b570381eda50df/0634038c9be32453cd0df5a9c1b6e2399ae03bbb734dce079a5e8d20c7cde474"
	I0923 13:49:14.445306  888566 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/c9188d64724cfa065d51579396b805294b8cca1d76724dd8f38c17a40412d46b/kubepods/burstable/pod2f23632f5677d2cb66b570381eda50df/0634038c9be32453cd0df5a9c1b6e2399ae03bbb734dce079a5e8d20c7cde474/freezer.state
	I0923 13:49:14.454273  888566 api_server.go:204] freezer state: "THAWED"
	I0923 13:49:14.454308  888566 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0923 13:49:14.463210  888566 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0923 13:49:14.463256  888566 status.go:456] multinode-415381 apiserver status = Running (err=<nil>)
	I0923 13:49:14.463268  888566 status.go:176] multinode-415381 status: &{Name:multinode-415381 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 13:49:14.463325  888566 status.go:174] checking status of multinode-415381-m02 ...
	I0923 13:49:14.463674  888566 cli_runner.go:164] Run: docker container inspect multinode-415381-m02 --format={{.State.Status}}
	I0923 13:49:14.482173  888566 status.go:364] multinode-415381-m02 host status = "Running" (err=<nil>)
	I0923 13:49:14.482198  888566 host.go:66] Checking if "multinode-415381-m02" exists ...
	I0923 13:49:14.482511  888566 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-415381-m02
	I0923 13:49:14.499706  888566 host.go:66] Checking if "multinode-415381-m02" exists ...
	I0923 13:49:14.500049  888566 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 13:49:14.500145  888566 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-415381-m02
	I0923 13:49:14.517148  888566 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33673 SSHKeyPath:/home/jenkins/minikube-integration/19690-714802/.minikube/machines/multinode-415381-m02/id_rsa Username:docker}
	I0923 13:49:14.610313  888566 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 13:49:14.622309  888566 status.go:176] multinode-415381-m02 status: &{Name:multinode-415381-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0923 13:49:14.622346  888566 status.go:174] checking status of multinode-415381-m03 ...
	I0923 13:49:14.622658  888566 cli_runner.go:164] Run: docker container inspect multinode-415381-m03 --format={{.State.Status}}
	I0923 13:49:14.639327  888566 status.go:364] multinode-415381-m03 host status = "Stopped" (err=<nil>)
	I0923 13:49:14.639354  888566 status.go:377] host is not running, skipping remaining checks
	I0923 13:49:14.639362  888566 status.go:176] multinode-415381-m03 status: &{Name:multinode-415381-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.24s)
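The status stderr above shows minikube checking each running node's disk pressure with `sh -c "df -h /var | awk 'NR==2{print $5}'"`: row 2 of `df`'s output is the filesystem holding `/var`, and column 5 is its Use% figure. A sketch of that extraction against canned `df -h` output (hypothetical sizes, not from this run):

```shell
# Sample 'df -h /var' output; line 1 is the header, line 2 the data row.
df_out='Filesystem      Size  Used Avail Use% Mounted on
/dev/root        97G   12G   86G  13% /'

# NR==2 selects the data row; $5 is the Use% column.
echo "$df_out" | awk 'NR==2{print $5}'   # → 13%
```

Note the `$5` must survive the shell that wraps the command, which is why the log shows the awk program single-quoted inside the `sh -c` double quotes.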

TestMultiNode/serial/StartAfterStop (10.98s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-415381 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-415381 node start m03 -v=7 --alsologtostderr: (10.184793934s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-415381 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.98s)

TestMultiNode/serial/RestartKeepsNodes (104.29s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-415381
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-415381
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-415381: (22.556694678s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-415381 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-415381 --wait=true -v=8 --alsologtostderr: (1m21.607076677s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-415381
--- PASS: TestMultiNode/serial/RestartKeepsNodes (104.29s)

TestMultiNode/serial/DeleteNode (5.81s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-415381 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-415381 node delete m03: (5.072521748s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-415381 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.81s)

TestMultiNode/serial/StopMultiNode (21.75s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-415381 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-415381 stop: (21.561988301s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-415381 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-415381 status: exit status 7 (99.346811ms)

-- stdout --
	multinode-415381
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-415381-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-415381 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-415381 status --alsologtostderr: exit status 7 (91.308206ms)

-- stdout --
	multinode-415381
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-415381-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0923 13:51:37.432323  901933 out.go:345] Setting OutFile to fd 1 ...
	I0923 13:51:37.432518  901933 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:51:37.432548  901933 out.go:358] Setting ErrFile to fd 2...
	I0923 13:51:37.432569  901933 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:51:37.432819  901933 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-714802/.minikube/bin
	I0923 13:51:37.433071  901933 out.go:352] Setting JSON to false
	I0923 13:51:37.433138  901933 mustload.go:65] Loading cluster: multinode-415381
	I0923 13:51:37.433208  901933 notify.go:220] Checking for updates...
	I0923 13:51:37.433592  901933 config.go:182] Loaded profile config "multinode-415381": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 13:51:37.433628  901933 status.go:174] checking status of multinode-415381 ...
	I0923 13:51:37.434232  901933 cli_runner.go:164] Run: docker container inspect multinode-415381 --format={{.State.Status}}
	I0923 13:51:37.452926  901933 status.go:364] multinode-415381 host status = "Stopped" (err=<nil>)
	I0923 13:51:37.453060  901933 status.go:377] host is not running, skipping remaining checks
	I0923 13:51:37.453074  901933 status.go:176] multinode-415381 status: &{Name:multinode-415381 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 13:51:37.453113  901933 status.go:174] checking status of multinode-415381-m02 ...
	I0923 13:51:37.453444  901933 cli_runner.go:164] Run: docker container inspect multinode-415381-m02 --format={{.State.Status}}
	I0923 13:51:37.481261  901933 status.go:364] multinode-415381-m02 host status = "Stopped" (err=<nil>)
	I0923 13:51:37.481284  901933 status.go:377] host is not running, skipping remaining checks
	I0923 13:51:37.481292  901933 status.go:176] multinode-415381-m02 status: &{Name:multinode-415381-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.75s)

TestMultiNode/serial/RestartMultiNode (56.97s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-415381 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-415381 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (56.286706618s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-415381 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (56.97s)

TestMultiNode/serial/ValidateNameConflict (34.97s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-415381
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-415381-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-415381-m02 --driver=docker  --container-runtime=docker: exit status 14 (100.148546ms)

-- stdout --
	* [multinode-415381-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19690-714802/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-714802/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-415381-m02' is duplicated with machine name 'multinode-415381-m02' in profile 'multinode-415381'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-415381-m03 --driver=docker  --container-runtime=docker
E0923 13:52:51.159812  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/functional-863481/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-415381-m03 --driver=docker  --container-runtime=docker: (32.40609456s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-415381
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-415381: exit status 80 (305.885995ms)

-- stdout --
	* Adding node m03 to cluster multinode-415381 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-415381-m03 already exists in multinode-415381-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-415381-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-415381-m03: (2.102456455s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (34.97s)

TestPreload (153.12s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-195709 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
E0923 13:53:35.918042  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-195709 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (1m43.75649329s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-195709 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-195709 image pull gcr.io/k8s-minikube/busybox: (2.090563764s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-195709
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-195709: (10.90572914s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-195709 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-195709 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (33.900541332s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-195709 image list
helpers_test.go:175: Cleaning up "test-preload-195709" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-195709
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-195709: (2.248972313s)
--- PASS: TestPreload (153.12s)

TestScheduledStopUnix (105.55s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-184333 --memory=2048 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-184333 --memory=2048 --driver=docker  --container-runtime=docker: (32.203717456s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-184333 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-184333 -n scheduled-stop-184333
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-184333 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0923 13:56:19.285347  720192 retry.go:31] will retry after 68.188µs: open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/scheduled-stop-184333/pid: no such file or directory
I0923 13:56:19.286516  720192 retry.go:31] will retry after 117.351µs: open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/scheduled-stop-184333/pid: no such file or directory
I0923 13:56:19.286866  720192 retry.go:31] will retry after 313.793µs: open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/scheduled-stop-184333/pid: no such file or directory
I0923 13:56:19.287640  720192 retry.go:31] will retry after 266.282µs: open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/scheduled-stop-184333/pid: no such file or directory
I0923 13:56:19.288765  720192 retry.go:31] will retry after 738.783µs: open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/scheduled-stop-184333/pid: no such file or directory
I0923 13:56:19.289897  720192 retry.go:31] will retry after 539.891µs: open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/scheduled-stop-184333/pid: no such file or directory
I0923 13:56:19.291033  720192 retry.go:31] will retry after 1.436426ms: open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/scheduled-stop-184333/pid: no such file or directory
I0923 13:56:19.293199  720192 retry.go:31] will retry after 1.968497ms: open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/scheduled-stop-184333/pid: no such file or directory
I0923 13:56:19.295404  720192 retry.go:31] will retry after 3.464564ms: open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/scheduled-stop-184333/pid: no such file or directory
I0923 13:56:19.299601  720192 retry.go:31] will retry after 3.861663ms: open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/scheduled-stop-184333/pid: no such file or directory
I0923 13:56:19.303886  720192 retry.go:31] will retry after 8.48458ms: open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/scheduled-stop-184333/pid: no such file or directory
I0923 13:56:19.313279  720192 retry.go:31] will retry after 9.868503ms: open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/scheduled-stop-184333/pid: no such file or directory
I0923 13:56:19.323560  720192 retry.go:31] will retry after 16.562864ms: open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/scheduled-stop-184333/pid: no such file or directory
I0923 13:56:19.341509  720192 retry.go:31] will retry after 13.479612ms: open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/scheduled-stop-184333/pid: no such file or directory
I0923 13:56:19.356403  720192 retry.go:31] will retry after 23.759303ms: open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/scheduled-stop-184333/pid: no such file or directory
I0923 13:56:19.380636  720192 retry.go:31] will retry after 33.710524ms: open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/scheduled-stop-184333/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-184333 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-184333 -n scheduled-stop-184333
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-184333
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-184333 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-184333
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-184333: exit status 7 (65.478062ms)

-- stdout --
	scheduled-stop-184333
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-184333 -n scheduled-stop-184333
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-184333 -n scheduled-stop-184333: exit status 7 (66.828433ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-184333" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-184333
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-184333: (1.668033333s)
--- PASS: TestScheduledStopUnix (105.55s)

TestSkaffold (117.8s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe3118769196 version
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p skaffold-678946 --memory=2600 --driver=docker  --container-runtime=docker
E0923 13:57:51.158485  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/functional-863481/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p skaffold-678946 --memory=2600 --driver=docker  --container-runtime=docker: (31.120425723s)
skaffold_test.go:86: copying out/minikube-linux-arm64 to /home/jenkins/workspace/Docker_Linux_docker_arm64/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe3118769196 run --minikube-profile skaffold-678946 --kube-context skaffold-678946 --status-check=true --port-forward=false --interactive=false
E0923 13:58:35.918622  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe3118769196 run --minikube-profile skaffold-678946 --kube-context skaffold-678946 --status-check=true --port-forward=false --interactive=false: (1m11.295998716s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-9b7cf4cd-lw2kr" [9a398d90-d60a-4c98-8062-34fb69145803] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.005099324s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-86c856dd95-krw9v" [7054c9a3-38fb-459d-9453-a0934ba56067] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.006015185s
helpers_test.go:175: Cleaning up "skaffold-678946" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p skaffold-678946
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p skaffold-678946: (2.911575084s)
--- PASS: TestSkaffold (117.80s)

TestInsufficientStorage (11.92s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-490291 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-490291 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (9.659309498s)

-- stdout --
	{"specversion":"1.0","id":"cb778b17-8b35-4c63-8674-8c77e99781d6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-490291] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"05a918d8-593a-4ea8-bfff-8812f10efa5c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19690"}}
	{"specversion":"1.0","id":"73bd2e69-df59-43f4-a520-9b54d69f8f78","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ddd22d18-1c8e-4d63-94bb-df2dca9890c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19690-714802/kubeconfig"}}
	{"specversion":"1.0","id":"e9f0d825-2a5c-46a2-920a-70616bd935fd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-714802/.minikube"}}
	{"specversion":"1.0","id":"97d60e1d-5b85-4817-b6d5-eb39f16f8bc1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"7d554590-74ce-4276-a05a-b3c21cd51971","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"362d6752-fa81-4d9a-a53f-ab74783954ec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"83cc816a-2670-490f-a912-f61bee092ebe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"cb80520d-fc84-405e-b6f9-1d85043be968","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"6379f5e6-0503-4e99-9b38-1c37d287173e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"c16dd680-7c7b-40ce-b3e8-23f7f6343cf7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-490291\" primary control-plane node in \"insufficient-storage-490291\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"fcb79e81-4868-4977-b1a9-9f68768b608a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1726784731-19672 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"18e6afa7-266e-4442-9d1c-b540b60a3061","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"1395e760-7614-46fc-b0e1-5c23c228158a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-490291 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-490291 --output=json --layout=cluster: exit status 7 (274.317616ms)

-- stdout --
	{"Name":"insufficient-storage-490291","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-490291","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0923 13:59:39.863427  936199 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-490291" does not appear in /home/jenkins/minikube-integration/19690-714802/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-490291 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-490291 --output=json --layout=cluster: exit status 7 (315.523799ms)

-- stdout --
	{"Name":"insufficient-storage-490291","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-490291","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0923 13:59:40.177316  936260 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-490291" does not appear in /home/jenkins/minikube-integration/19690-714802/kubeconfig
	E0923 13:59:40.188917  936260 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/insufficient-storage-490291/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-490291" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-490291
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-490291: (1.667022796s)
--- PASS: TestInsufficientStorage (11.92s)

TestRunningBinaryUpgrade (79.65s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.4172874803 start -p running-upgrade-196135 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.4172874803 start -p running-upgrade-196135 --memory=2200 --vm-driver=docker  --container-runtime=docker: (42.573477026s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-196135 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0923 14:08:35.917546  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-196135 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (33.861545075s)
helpers_test.go:175: Cleaning up "running-upgrade-196135" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-196135
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-196135: (2.284740225s)
--- PASS: TestRunningBinaryUpgrade (79.65s)

TestKubernetesUpgrade (383.99s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-627249 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-627249 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (57.434814393s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-627249
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-627249: (11.265243797s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-627249 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-627249 status --format={{.Host}}: exit status 7 (110.154199ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-627249 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0923 14:07:51.158485  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/functional-863481/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-627249 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m41.624430978s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-627249 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-627249 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-627249 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker: exit status 106 (131.953719ms)

-- stdout --
	* [kubernetes-upgrade-627249] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19690-714802/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-714802/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-627249
	    minikube start -p kubernetes-upgrade-627249 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6272492 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-627249 --kubernetes-version=v1.31.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-627249 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-627249 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (30.55363459s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-627249" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-627249
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-627249: (2.757026407s)
--- PASS: TestKubernetesUpgrade (383.99s)
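The K8S_DOWNGRADE_UNSUPPORTED exit above is minikube refusing to move an existing v1.31.1 cluster back to v1.20.0. The version comparison behind such a guard can be sketched as follows; this is an illustrative helper only, not minikube's actual code:

```python
# Hypothetical sketch of a downgrade guard: refuse when the requested
# Kubernetes version is older than the one the existing cluster runs.
def parse_version(v: str) -> tuple:
    """Turn 'v1.31.1' into (1, 31, 1) for ordered comparison."""
    return tuple(int(part) for part in v.lstrip("v").split("."))

def is_unsupported_downgrade(existing: str, requested: str) -> bool:
    return parse_version(requested) < parse_version(existing)

print(is_unsupported_downgrade("v1.31.1", "v1.20.0"))  # True: refused
print(is_unsupported_downgrade("v1.31.1", "v1.31.1"))  # False: same version
```

Requesting the same or a newer version passes the guard, which is why the follow-up restart at v1.31.1 below succeeds.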

TestMissingContainerUpgrade (116.51s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.635170666 start -p missing-upgrade-170826 --memory=2200 --driver=docker  --container-runtime=docker
E0923 14:05:37.948090  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/skaffold-678946/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.635170666 start -p missing-upgrade-170826 --memory=2200 --driver=docker  --container-runtime=docker: (43.328092911s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-170826
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-170826: (11.85649969s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-170826
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-170826 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0923 14:06:59.870313  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/skaffold-678946/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-170826 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (57.734106857s)
helpers_test.go:175: Cleaning up "missing-upgrade-170826" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-170826
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-170826: (2.29238553s)
--- PASS: TestMissingContainerUpgrade (116.51s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-069353 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-069353 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (93.956856ms)
-- stdout --
	* [NoKubernetes-069353] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19690-714802/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-714802/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

TestNoKubernetes/serial/StartWithK8s (42.81s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-069353 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-069353 --driver=docker  --container-runtime=docker: (42.3821461s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-069353 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (42.81s)

TestNoKubernetes/serial/StartWithStopK8s (18.91s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-069353 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-069353 --no-kubernetes --driver=docker  --container-runtime=docker: (16.792455416s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-069353 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-069353 status -o json: exit status 2 (316.141516ms)
-- stdout --
	{"Name":"NoKubernetes-069353","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-069353
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-069353: (1.801905119s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (18.91s)
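The `status -o json` payload above is a single flat object, and the exit status 2 signals a degraded (but not errored) profile. A minimal sketch of checking "host up, Kubernetes stopped" from that payload; the literal is copied from the output above:

```python
import json

# Status JSON as captured in the test output above.
status = json.loads(
    '{"Name":"NoKubernetes-069353","Host":"Running","Kubelet":"Stopped",'
    '"APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}'
)

# The --no-kubernetes profile should keep the container running
# while kubelet and the apiserver stay stopped.
host_only = status["Host"] == "Running" and status["Kubelet"] == "Stopped"
print(host_only)  # True
```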

TestNoKubernetes/serial/Start (9.96s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-069353 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-069353 --no-kubernetes --driver=docker  --container-runtime=docker: (9.959972266s)
--- PASS: TestNoKubernetes/serial/Start (9.96s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-069353 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-069353 "sudo systemctl is-active --quiet service kubelet": exit status 1 (272.429003ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)
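The check above leans on `systemctl is-active --quiet` exit codes: 0 means the unit is active, non-zero means it is not (3 is the conventional "program is not running" status, which matches the `ssh: Process exited with status 3` line in the log). A small sketch of how such an exit status can be interpreted:

```python
# Interpret the exit status of `systemctl is-active --quiet kubelet`
# as relayed back over ssh. 0 = active; any non-zero value = not active.
def kubelet_running(exit_status: int) -> bool:
    return exit_status == 0

print(kubelet_running(3))  # False: inactive, as the log above shows
print(kubelet_running(0))  # True
```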

TestNoKubernetes/serial/ProfileList (1.04s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.04s)

TestNoKubernetes/serial/Stop (1.27s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-069353
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-069353: (1.267753942s)
--- PASS: TestNoKubernetes/serial/Stop (1.27s)

TestNoKubernetes/serial/StartNoArgs (7.92s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-069353 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-069353 --driver=docker  --container-runtime=docker: (7.92227531s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.92s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-069353 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-069353 "sudo systemctl is-active --quiet service kubelet": exit status 1 (283.507384ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

TestStoppedBinaryUpgrade/Setup (1.06s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.06s)

TestStoppedBinaryUpgrade/Upgrade (122.07s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.754687779 start -p stopped-upgrade-105656 --memory=2200 --vm-driver=docker  --container-runtime=docker
E0923 14:03:35.917333  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:04:16.005035  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/skaffold-678946/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:04:16.011973  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/skaffold-678946/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:04:16.023570  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/skaffold-678946/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:04:16.044903  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/skaffold-678946/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:04:16.086293  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/skaffold-678946/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:04:16.173077  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/skaffold-678946/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:04:16.334417  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/skaffold-678946/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:04:16.656115  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/skaffold-678946/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:04:17.297845  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/skaffold-678946/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:04:18.579391  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/skaffold-678946/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:04:21.141129  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/skaffold-678946/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:04:26.263121  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/skaffold-678946/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:04:36.504672  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/skaffold-678946/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.754687779 start -p stopped-upgrade-105656 --memory=2200 --vm-driver=docker  --container-runtime=docker: (1m19.346804581s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.754687779 -p stopped-upgrade-105656 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.754687779 -p stopped-upgrade-105656 stop: (10.98678775s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-105656 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0923 14:04:56.986784  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/skaffold-678946/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-105656 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (31.737634064s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (122.07s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.35s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-105656
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-105656: (1.349447486s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.35s)

TestPause/serial/Start (75.86s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-311030 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
E0923 14:09:16.004751  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/skaffold-678946/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:09:43.713137  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/skaffold-678946/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-311030 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (1m15.859766247s)
--- PASS: TestPause/serial/Start (75.86s)

TestPause/serial/SecondStartNoReconfiguration (27.75s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-311030 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-311030 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (27.733284293s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (27.75s)

TestPause/serial/Pause (0.65s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-311030 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.65s)

TestPause/serial/VerifyStatus (0.34s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-311030 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-311030 --output=json --layout=cluster: exit status 2 (343.223497ms)
-- stdout --
	{"Name":"pause-311030","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-311030","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.34s)
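The `--layout=cluster` payload above encodes state as HTTP-like status codes (200 OK, 405 Stopped, 418 Paused), which is also why the command exits 2 rather than 0 on a paused cluster. A minimal sketch of walking the per-node component map; the literal below is abridged from the output above:

```python
import json

# Abridged cluster-layout status from the log above: the cluster is
# Paused (418), the node is OK (200), apiserver Paused, kubelet Stopped.
layout = json.loads("""
{"Name": "pause-311030", "StatusCode": 418, "StatusName": "Paused",
 "Nodes": [{"Name": "pause-311030", "StatusCode": 200, "StatusName": "OK",
            "Components": {
              "apiserver": {"Name": "apiserver", "StatusCode": 418, "StatusName": "Paused"},
              "kubelet": {"Name": "kubelet", "StatusCode": 405, "StatusName": "Stopped"}}}]}
""")

PAUSED = 418
paused_components = [
    name
    for node in layout["Nodes"]
    for name, comp in node["Components"].items()
    if comp["StatusCode"] == PAUSED
]
print(paused_components)  # ['apiserver']
```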

TestPause/serial/Unpause (0.53s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-311030 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.53s)

TestPause/serial/PauseAgain (0.86s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-311030 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.86s)

TestPause/serial/DeletePaused (2.18s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-311030 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-311030 --alsologtostderr -v=5: (2.180649066s)
--- PASS: TestPause/serial/DeletePaused (2.18s)

TestPause/serial/VerifyDeletedResources (0.46s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-311030
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-311030: exit status 1 (18.589932ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-311030: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.46s)
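`docker volume inspect` on a missing volume exits non-zero but still prints a valid JSON array (`[]`) on stdout, as captured above. A sketch of the "volume really gone" interpretation, using the stdout and exit status from the log:

```python
import json

# Values as captured in the test output above: exit status 1,
# stdout "[]" (an empty JSON array of matched volumes).
stdout, exit_status = "[]", 1

volumes = json.loads(stdout)
deleted = exit_status != 0 and volumes == []
print(deleted)  # True: no volume named pause-311030 remains
```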

TestNetworkPlugins/group/auto/Start (49.93s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-094393 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-094393 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (49.932158127s)
--- PASS: TestNetworkPlugins/group/auto/Start (49.93s)

TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-094393 "pgrep -a kubelet"
I0923 14:11:20.537006  720192 config.go:182] Loaded profile config "auto-094393": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

TestNetworkPlugins/group/auto/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-094393 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-kssnf" [e35e9f2c-d418-496a-9952-4625c87b1e86] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-kssnf" [e35e9f2c-d418-496a-9952-4625c87b1e86] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.003855213s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.28s)
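The NetCatPod check above polls until pods matching `app=netcat` leave Pending and report Running, within a 15m deadline. A hypothetical poll loop in the same spirit; the `phases` list stands in for repeated pod-status lookups and is not minikube's actual helper:

```python
# Poll a sequence of observed pod phases until one reports Running,
# or give up after max_polls attempts.
def wait_until_running(phases, max_polls=10):
    """Return the 1-based poll count at which the pod reached Running."""
    for i, phase in enumerate(phases, start=1):
        if phase == "Running":
            return i
        if i >= max_polls:
            break
    raise TimeoutError("pod never became Running")

# Mirrors the log above: two Pending observations, then Running.
print(wait_until_running(["Pending", "Pending", "Running"]))  # 3
```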

TestNetworkPlugins/group/auto/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-094393 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.21s)

TestNetworkPlugins/group/auto/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-094393 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

TestNetworkPlugins/group/auto/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-094393 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)

TestNetworkPlugins/group/kindnet/Start (82.61s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-094393 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-094393 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (1m22.60525961s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (82.61s)

TestNetworkPlugins/group/calico/Start (90.02s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-094393 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
E0923 14:12:51.157796  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/functional-863481/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-094393 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m30.022761235s)
--- PASS: TestNetworkPlugins/group/calico/Start (90.02s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-kmbcj" [88e26e3b-3671-4772-b971-6bcfe87a9d8b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.007460525s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.43s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-094393 "pgrep -a kubelet"
I0923 14:13:22.349229  720192 config.go:182] Loaded profile config "kindnet-094393": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.43s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.43s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-094393 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-zns7m" [f98650e4-6061-44a1-992a-2fedfa9280dd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-zns7m" [f98650e4-6061-44a1-992a-2fedfa9280dd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.005338157s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.43s)

TestNetworkPlugins/group/kindnet/DNS (0.39s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-094393 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.39s)

TestNetworkPlugins/group/kindnet/Localhost (0.31s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-094393 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.31s)

TestNetworkPlugins/group/kindnet/HairPin (0.28s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-094393 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.28s)

TestNetworkPlugins/group/custom-flannel/Start (60.56s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-094393 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-094393 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (1m0.563226665s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (60.56s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-hb9br" [f92c1691-6cc1-4026-af94-b3dbfac395f6] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005136025s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-094393 "pgrep -a kubelet"
I0923 14:14:15.213183  720192 config.go:182] Loaded profile config "calico-094393": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.41s)

TestNetworkPlugins/group/calico/NetCatPod (13.33s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-094393 replace --force -f testdata/netcat-deployment.yaml
I0923 14:14:15.528339  720192 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-zswzh" [57737400-b003-43d2-a2c4-0390dbc84d34] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0923 14:14:16.004995  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/skaffold-678946/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-zswzh" [57737400-b003-43d2-a2c4-0390dbc84d34] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.00711087s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.33s)

TestNetworkPlugins/group/calico/DNS (0.38s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-094393 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.38s)

TestNetworkPlugins/group/calico/Localhost (0.27s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-094393 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.27s)

TestNetworkPlugins/group/calico/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-094393 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.21s)

TestNetworkPlugins/group/false/Start (83.53s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p false-094393 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p false-094393 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (1m23.528827156s)
--- PASS: TestNetworkPlugins/group/false/Start (83.53s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-094393 "pgrep -a kubelet"
I0923 14:14:59.397344  720192 config.go:182] Loaded profile config "custom-flannel-094393": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.36s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (13.36s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-094393 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-t9cdv" [5145a109-067f-4485-9e41-a54fc6bcae7b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-t9cdv" [5145a109-067f-4485-9e41-a54fc6bcae7b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.004711901s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.36s)

TestNetworkPlugins/group/custom-flannel/DNS (0.29s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-094393 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.29s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-094393 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-094393 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.23s)

TestNetworkPlugins/group/enable-default-cni/Start (47.28s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-094393 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-094393 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (47.277523984s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (47.28s)

TestNetworkPlugins/group/false/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p false-094393 "pgrep -a kubelet"
I0923 14:16:20.779349  720192 config.go:182] Loaded profile config "false-094393": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.28s)

TestNetworkPlugins/group/false/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-094393 replace --force -f testdata/netcat-deployment.yaml
E0923 14:16:20.793733  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/auto-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:16:20.800153  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/auto-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:16:20.814832  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/auto-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:16:20.836243  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/auto-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:16:20.877565  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/auto-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:16:20.958798  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/auto-094393/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-nkwt5" [0be0d20f-537f-4c78-8dbc-6fad196ca26d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0923 14:16:21.120614  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/auto-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:16:21.442296  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/auto-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:16:22.084813  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/auto-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:16:23.366906  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/auto-094393/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-nkwt5" [0be0d20f-537f-4c78-8dbc-6fad196ca26d] Running
E0923 14:16:25.929449  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/auto-094393/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 10.003851792s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (10.28s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-094393 "pgrep -a kubelet"
I0923 14:16:26.820646  720192 config.go:182] Loaded profile config "enable-default-cni-094393": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.3s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-094393 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-nqxhq" [b4e90430-525b-4eb6-a649-8ce9db9aacd0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0923 14:16:31.051316  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/auto-094393/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-nqxhq" [b4e90430-525b-4eb6-a649-8ce9db9aacd0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.00452227s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.30s)

TestNetworkPlugins/group/false/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-094393 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.22s)

TestNetworkPlugins/group/false/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-094393 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.18s)

TestNetworkPlugins/group/false/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-094393 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.23s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.28s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-094393 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.28s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-094393 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.23s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-094393 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.29s)

TestNetworkPlugins/group/flannel/Start (69.04s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-094393 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-094393 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (1m9.041418767s)
--- PASS: TestNetworkPlugins/group/flannel/Start (69.04s)

TestNetworkPlugins/group/bridge/Start (86.39s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-094393 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
E0923 14:17:42.736499  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/auto-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:17:51.158738  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/functional-863481/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-094393 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (1m26.39313556s)
--- PASS: TestNetworkPlugins/group/bridge/Start (86.39s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-x2tll" [3a458c40-0f7a-4acf-aff8-1efde5a63b2d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004845423s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-094393 "pgrep -a kubelet"
I0923 14:18:10.809777  720192 config.go:182] Loaded profile config "flannel-094393": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/flannel/NetCatPod (11.26s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-094393 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-6ls6m" [59d5aed6-7ed9-4509-8488-b40bdf7c2025] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0923 14:18:15.916341  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/kindnet-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:18:15.922796  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/kindnet-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:18:15.934227  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/kindnet-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:18:15.955691  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/kindnet-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:18:15.997120  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/kindnet-094393/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-6ls6m" [59d5aed6-7ed9-4509-8488-b40bdf7c2025] Running
E0923 14:18:16.079161  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/kindnet-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:18:16.240832  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/kindnet-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:18:16.562561  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/kindnet-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:18:17.204774  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/kindnet-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:18:18.486837  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/kindnet-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:18:18.991864  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:18:21.048929  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/kindnet-094393/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.00489562s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.26s)

TestNetworkPlugins/group/flannel/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-094393 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.20s)

TestNetworkPlugins/group/flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-094393 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

TestNetworkPlugins/group/flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-094393 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-094393 "pgrep -a kubelet"
I0923 14:18:30.552929  720192 config.go:182] Loaded profile config "bridge-094393": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.39s)

TestNetworkPlugins/group/bridge/NetCatPod (11.38s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-094393 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-785xl" [e1301869-4ce2-4478-863b-2abbd7274a5f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0923 14:18:35.917386  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:18:36.412436  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/kindnet-094393/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-785xl" [e1301869-4ce2-4478-863b-2abbd7274a5f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.00513225s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.38s)

TestNetworkPlugins/group/bridge/DNS (0.39s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-094393 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.39s)

TestNetworkPlugins/group/bridge/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-094393 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.23s)

TestNetworkPlugins/group/bridge/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-094393 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.22s)

TestNetworkPlugins/group/kubenet/Start (53.85s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kubenet-094393 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
E0923 14:18:56.894336  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/kindnet-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:19:04.658009  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/auto-094393/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kubenet-094393 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (53.850584917s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (53.85s)

TestStartStop/group/old-k8s-version/serial/FirstStart (153.61s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-923826 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0923 14:19:08.798182  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/calico-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:19:08.804481  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/calico-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:19:08.816507  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/calico-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:19:08.837968  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/calico-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:19:08.879338  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/calico-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:19:08.961116  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/calico-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:19:09.122614  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/calico-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:19:09.444107  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/calico-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:19:10.086088  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/calico-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:19:11.367678  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/calico-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:19:13.929014  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/calico-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:19:16.004848  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/skaffold-678946/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:19:19.050871  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/calico-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:19:29.292930  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/calico-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:19:37.855820  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/kindnet-094393/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-923826 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m33.612470037s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (153.61s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.44s)
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kubenet-094393 "pgrep -a kubelet"
I0923 14:19:40.508224  720192 config.go:182] Loaded profile config "kubenet-094393": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.44s)

TestNetworkPlugins/group/kubenet/NetCatPod (10.37s)
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-094393 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-sclmh" [fcdda6bd-73cf-4509-b3a5-4b41aae6ffc9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-sclmh" [fcdda6bd-73cf-4509-b3a5-4b41aae6ffc9] Running
E0923 14:19:49.774571  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/calico-094393/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 10.005422953s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (10.37s)

TestNetworkPlugins/group/kubenet/DNS (0.28s)
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-094393 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.28s)

TestNetworkPlugins/group/kubenet/Localhost (0.28s)
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-094393 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.28s)

TestNetworkPlugins/group/kubenet/HairPin (0.26s)
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-094393 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.26s)
E0923 14:31:20.791686  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/auto-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:31:21.035597  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/false-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:31:27.097693  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/enable-default-cni-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:31:40.461590  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/no-preload-187110/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:31:40.468068  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/no-preload-187110/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:31:40.479609  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/no-preload-187110/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:31:40.501198  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/no-preload-187110/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:31:40.542704  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/no-preload-187110/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:31:40.624198  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/no-preload-187110/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:31:40.785850  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/no-preload-187110/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:31:41.107325  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/no-preload-187110/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:31:41.582980  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/old-k8s-version-923826/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:31:41.749577  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/no-preload-187110/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:31:43.031301  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/no-preload-187110/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:31:45.593227  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/no-preload-187110/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:31:50.714871  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/no-preload-187110/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:32:00.956676  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/no-preload-187110/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:32:09.286578  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/old-k8s-version-923826/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:32:21.438812  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/no-preload-187110/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:32:43.861097  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/auto-094393/client.crt: no such file or directory" logger="UnhandledError"

TestStartStop/group/no-preload/serial/FirstStart (85.09s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-187110 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0923 14:20:20.232691  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/custom-flannel-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:20:30.735837  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/calico-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:20:39.075393  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/skaffold-678946/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:20:40.714631  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/custom-flannel-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:20:59.778156  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/kindnet-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:21:20.791478  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/auto-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:21:21.035455  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/false-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:21:21.041888  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/false-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:21:21.053337  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/false-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:21:21.074818  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/false-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:21:21.116293  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/false-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:21:21.197930  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/false-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:21:21.359410  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/false-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:21:21.676092  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/custom-flannel-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:21:21.681583  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/false-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:21:22.323801  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/false-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:21:23.605433  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/false-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:21:26.167029  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/false-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:21:27.098282  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/enable-default-cni-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:21:27.104679  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/enable-default-cni-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:21:27.116183  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/enable-default-cni-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:21:27.137710  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/enable-default-cni-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:21:27.179130  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/enable-default-cni-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:21:27.260596  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/enable-default-cni-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:21:27.422153  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/enable-default-cni-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:21:27.743442  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/enable-default-cni-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:21:28.385727  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/enable-default-cni-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:21:29.667084  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/enable-default-cni-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:21:31.289431  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/false-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:21:32.229373  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/enable-default-cni-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:21:37.351487  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/enable-default-cni-094393/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-187110 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (1m25.09393829s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (85.09s)

TestStartStop/group/no-preload/serial/DeployApp (8.35s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-187110 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [17debb9a-c020-4a94-842d-4da9b765f399] Pending
helpers_test.go:344: "busybox" [17debb9a-c020-4a94-842d-4da9b765f399] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0923 14:21:41.530905  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/false-094393/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [17debb9a-c020-4a94-842d-4da9b765f399] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.00365327s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-187110 exec busybox -- /bin/sh -c "ulimit -n"
E0923 14:21:48.499728  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/auto-094393/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.35s)

TestStartStop/group/old-k8s-version/serial/DeployApp (12.5s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-923826 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [65c57b4a-0e01-4825-a253-e05f6b4e0111] Pending
helpers_test.go:344: "busybox" [65c57b4a-0e01-4825-a253-e05f6b4e0111] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [65c57b4a-0e01-4825-a253-e05f6b4e0111] Running
E0923 14:21:47.593776  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/enable-default-cni-094393/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 12.004073982s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-923826 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (12.50s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.18s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-187110 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-187110 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.068823469s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-187110 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.18s)

TestStartStop/group/no-preload/serial/Stop (11.16s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-187110 --alsologtostderr -v=3
E0923 14:21:52.657630  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/calico-094393/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-187110 --alsologtostderr -v=3: (11.155236796s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.16s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.06s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-923826 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-923826 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.06s)

TestStartStop/group/old-k8s-version/serial/Stop (11.07s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-923826 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-923826 --alsologtostderr -v=3: (11.073987386s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.07s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-187110 -n no-preload-187110
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-187110 -n no-preload-187110: exit status 7 (71.961809ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-187110 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/no-preload/serial/SecondStart (292.81s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-187110 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0923 14:22:02.012284  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/false-094393/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-187110 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m52.446335677s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-187110 -n no-preload-187110
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (292.81s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.25s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-923826 -n old-k8s-version-923826
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-923826 -n old-k8s-version-923826: exit status 7 (110.613958ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-923826 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.25s)
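The "status error: exit status N (may be ok)" lines above are not failures: in this run, `minikube status` exits 7 when the host is stopped and 2 when a component (a paused apiserver or stopped kubelet) is not running, and the harness treats both as acceptable states. A minimal sketch of that interpretation, using a hypothetical `interpret_status` helper that is not part of minikube itself:

```shell
# interpret_status maps the exit codes `minikube status` produced in this run
# to the state the harness accepted. The code-to-state mapping is taken from
# the log above (7 alongside "Stopped", 2 alongside "Paused"/"Stopped"), not
# from a general minikube specification.
interpret_status() {
  case "$1" in
    0) echo "all components running" ;;
    2) echo "component stopped or paused (may be ok)" ;;
    7) echo "host stopped (may be ok)" ;;
    *) echo "unknown exit code $1" ;;
  esac
}

interpret_status 7   # → host stopped (may be ok)
interpret_status 2   # → component stopped or paused (may be ok)
```

This is why EnableAddonAfterStop proceeds to `addons enable` after an exit-7 status, and why the Pause test below continues after exit-2 statuses.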

TestStartStop/group/old-k8s-version/serial/SecondStart (32.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-923826 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0923 14:22:08.076182  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/enable-default-cni-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:22:34.227912  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/functional-863481/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-923826 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (31.579280238s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-923826 -n old-k8s-version-923826
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (32.20s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (30.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0923 14:22:42.974342  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/false-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:22:43.597576  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/custom-flannel-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:22:49.037801  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/enable-default-cni-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:22:51.158396  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/functional-863481/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "kubernetes-dashboard-cd95d586-6twft" [1189ec88-4932-4aba-b60a-2ad82c92f97d] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-cd95d586-6twft" [1189ec88-4932-4aba-b60a-2ad82c92f97d] Running
E0923 14:23:04.501585  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/flannel-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:23:04.507952  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/flannel-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:23:04.519471  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/flannel-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:23:04.540977  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/flannel-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:23:04.582423  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/flannel-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:23:04.663934  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/flannel-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:23:04.825452  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/flannel-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:23:05.147147  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/flannel-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:23:05.789182  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/flannel-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:23:07.070784  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/flannel-094393/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 30.005339912s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (30.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-6twft" [1189ec88-4932-4aba-b60a-2ad82c92f97d] Running
E0923 14:23:09.632759  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/flannel-094393/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004358666s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-923826 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-923826 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/old-k8s-version/serial/Pause (2.84s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-923826 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-923826 -n old-k8s-version-923826
E0923 14:23:14.754100  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/flannel-094393/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-923826 -n old-k8s-version-923826: exit status 2 (365.503676ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-923826 -n old-k8s-version-923826
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-923826 -n old-k8s-version-923826: exit status 2 (370.295243ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-923826 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-923826 -n old-k8s-version-923826
E0923 14:23:15.915822  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/kindnet-094393/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-923826 -n old-k8s-version-923826
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.84s)

TestStartStop/group/embed-certs/serial/FirstStart (78.05s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-202525 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0923 14:23:24.996442  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/flannel-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:23:30.909929  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/bridge-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:23:30.916259  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/bridge-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:23:30.927583  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/bridge-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:23:30.949442  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/bridge-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:23:30.991395  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/bridge-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:23:31.073489  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/bridge-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:23:31.235367  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/bridge-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:23:31.557478  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/bridge-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:23:32.198764  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/bridge-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:23:33.480649  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/bridge-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:23:35.918116  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:23:36.042422  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/bridge-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:23:41.164326  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/bridge-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:23:43.620014  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/kindnet-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:23:45.478381  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/flannel-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:23:51.405693  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/bridge-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:24:04.896637  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/false-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:24:08.798382  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/calico-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:24:10.959384  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/enable-default-cni-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:24:11.887260  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/bridge-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:24:16.005579  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/skaffold-678946/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:24:26.440183  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/flannel-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:24:36.499338  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/calico-094393/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-202525 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (1m18.047349238s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (78.05s)

TestStartStop/group/embed-certs/serial/DeployApp (9.35s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-202525 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c1146477-1a6e-4505-a6aa-98caab1e9395] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0923 14:24:40.856307  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/kubenet-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:24:40.862794  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/kubenet-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:24:40.874162  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/kubenet-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:24:40.895653  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/kubenet-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:24:40.937207  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/kubenet-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:24:41.018665  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/kubenet-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:24:41.180563  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/kubenet-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:24:41.502616  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/kubenet-094393/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [c1146477-1a6e-4505-a6aa-98caab1e9395] Running
E0923 14:24:42.144148  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/kubenet-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:24:43.426170  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/kubenet-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:24:45.988447  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/kubenet-094393/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003752654s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-202525 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.35s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.05s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-202525 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-202525 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.05s)

TestStartStop/group/embed-certs/serial/Stop (10.83s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-202525 --alsologtostderr -v=3
E0923 14:24:51.110526  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/kubenet-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:24:52.848804  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/bridge-094393/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-202525 --alsologtostderr -v=3: (10.828214363s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.83s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-202525 -n embed-certs-202525
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-202525 -n embed-certs-202525: exit status 7 (76.20169ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-202525 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/embed-certs/serial/SecondStart (265.79s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-202525 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0923 14:24:59.732334  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/custom-flannel-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:25:01.352315  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/kubenet-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:25:21.833871  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/kubenet-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:25:27.439707  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/custom-flannel-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:25:48.362563  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/flannel-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:26:02.795191  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/kubenet-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:26:14.770570  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/bridge-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:26:20.792461  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/auto-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:26:21.035100  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/false-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:26:27.098219  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/enable-default-cni-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:26:41.583285  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/old-k8s-version-923826/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:26:41.589762  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/old-k8s-version-923826/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:26:41.601418  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/old-k8s-version-923826/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:26:41.622858  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/old-k8s-version-923826/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:26:41.664255  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/old-k8s-version-923826/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:26:41.745766  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/old-k8s-version-923826/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:26:41.907769  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/old-k8s-version-923826/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:26:42.229488  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/old-k8s-version-923826/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:26:42.871944  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/old-k8s-version-923826/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:26:44.153777  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/old-k8s-version-923826/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:26:46.715177  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/old-k8s-version-923826/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:26:48.738160  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/false-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:26:51.837119  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/old-k8s-version-923826/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-202525 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m25.444747283s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-202525 -n embed-certs-202525
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (265.79s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-5hgmp" [c95ff3f0-3fac-461d-94ba-bae2841fdf86] Running
E0923 14:26:54.801585  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/enable-default-cni-094393/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003442671s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.13s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-5hgmp" [c95ff3f0-3fac-461d-94ba-bae2841fdf86] Running
E0923 14:27:02.078594  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/old-k8s-version-923826/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004488048s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-187110 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.13s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-187110 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/no-preload/serial/Pause (2.99s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-187110 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-187110 -n no-preload-187110
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-187110 -n no-preload-187110: exit status 2 (339.864358ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-187110 -n no-preload-187110
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-187110 -n no-preload-187110: exit status 2 (347.019412ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-187110 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-187110 -n no-preload-187110
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-187110 -n no-preload-187110
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.99s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (45.84s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-000738 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0923 14:27:22.560630  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/old-k8s-version-923826/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:27:24.716876  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/kubenet-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:27:51.158383  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/functional-863481/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-000738 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (45.837259592s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (45.84s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.37s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-000738 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2856a2c7-7e58-4b77-b39a-5e038089514f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2856a2c7-7e58-4b77-b39a-5e038089514f] Running
E0923 14:28:03.522915  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/old-k8s-version-923826/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:28:04.501269  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/flannel-094393/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.004025222s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-000738 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.37s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-000738 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-000738 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.11s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (10.91s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-000738 --alsologtostderr -v=3
E0923 14:28:15.915997  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/kindnet-094393/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-000738 --alsologtostderr -v=3: (10.910884345s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.91s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-000738 -n default-k8s-diff-port-000738
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-000738 -n default-k8s-diff-port-000738: exit status 7 (71.798916ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-000738 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (266.35s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-000738 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0923 14:28:30.910083  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/bridge-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:28:32.204610  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/flannel-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:28:35.917237  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/addons-816293/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:28:58.612552  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/bridge-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:29:08.798597  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/calico-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:29:16.005185  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/skaffold-678946/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-000738 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m26.022577683s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-000738 -n default-k8s-diff-port-000738
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (266.35s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-hhnlz" [0525cfc3-b62b-4028-906b-42519944efce] Running
E0923 14:29:25.444442  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/old-k8s-version-923826/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003460142s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-hhnlz" [0525cfc3-b62b-4028-906b-42519944efce] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003714299s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-202525 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-202525 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/embed-certs/serial/Pause (3.01s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-202525 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-202525 -n embed-certs-202525
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-202525 -n embed-certs-202525: exit status 2 (344.883422ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-202525 -n embed-certs-202525
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-202525 -n embed-certs-202525: exit status 2 (337.275315ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-202525 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-202525 -n embed-certs-202525
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-202525 -n embed-certs-202525
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.01s)

TestStartStop/group/newest-cni/serial/FirstStart (39.42s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-482215 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0923 14:29:59.732325  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/custom-flannel-094393/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:30:08.559059  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/kubenet-094393/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-482215 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (39.420757442s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (39.42s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.14s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-482215 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-482215 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.137340614s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.14s)

TestStartStop/group/newest-cni/serial/Stop (9.11s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-482215 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-482215 --alsologtostderr -v=3: (9.106149901s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (9.11s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-482215 -n newest-cni-482215
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-482215 -n newest-cni-482215: exit status 7 (75.207939ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-482215 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/newest-cni/serial/SecondStart (19.91s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-482215 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-482215 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (19.334094429s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-482215 -n newest-cni-482215
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (19.91s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-482215 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/newest-cni/serial/Pause (2.9s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-482215 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-482215 -n newest-cni-482215
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-482215 -n newest-cni-482215: exit status 2 (304.28234ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-482215 -n newest-cni-482215
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-482215 -n newest-cni-482215: exit status 2 (328.904037ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-482215 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-482215 -n newest-cni-482215
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-482215 -n newest-cni-482215
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.90s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-ltkkj" [a17f3ef7-50c3-4ce3-a7fb-4d86e104e9f2] Running
E0923 14:32:51.157875  720192 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/functional-863481/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003337434s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-ltkkj" [a17f3ef7-50c3-4ce3-a7fb-4d86e104e9f2] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004170491s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-000738 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-000738 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-000738 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-000738 -n default-k8s-diff-port-000738
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-000738 -n default-k8s-diff-port-000738: exit status 2 (308.970326ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-000738 -n default-k8s-diff-port-000738
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-000738 -n default-k8s-diff-port-000738: exit status 2 (313.612289ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-000738 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-000738 -n default-k8s-diff-port-000738
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-000738 -n default-k8s-diff-port-000738
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.10s)

                                                
                                    

Test skip (23/342)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnlyKic (0.53s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-126922 --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-126922" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-126922
--- SKIP: TestDownloadOnlyKic (0.53s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/cilium (4s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-094393 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-094393

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-094393

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-094393

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-094393

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-094393

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-094393

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-094393

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-094393

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-094393

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-094393

>>> host: /etc/nsswitch.conf:
* Profile "cilium-094393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-094393"

>>> host: /etc/hosts:
* Profile "cilium-094393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-094393"

>>> host: /etc/resolv.conf:
* Profile "cilium-094393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-094393"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-094393

>>> host: crictl pods:
* Profile "cilium-094393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-094393"

>>> host: crictl containers:
* Profile "cilium-094393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-094393"

>>> k8s: describe netcat deployment:
error: context "cilium-094393" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-094393" does not exist

>>> k8s: netcat logs:
error: context "cilium-094393" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-094393" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-094393" does not exist

>>> k8s: coredns logs:
error: context "cilium-094393" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-094393" does not exist

>>> k8s: api server logs:
error: context "cilium-094393" does not exist

>>> host: /etc/cni:
* Profile "cilium-094393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-094393"

>>> host: ip a s:
* Profile "cilium-094393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-094393"

>>> host: ip r s:
* Profile "cilium-094393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-094393"

>>> host: iptables-save:
* Profile "cilium-094393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-094393"

>>> host: iptables table nat:
* Profile "cilium-094393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-094393"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-094393

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-094393

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-094393" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-094393" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-094393

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-094393

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-094393" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-094393" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-094393" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-094393" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-094393" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-094393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-094393"

>>> host: kubelet daemon config:
* Profile "cilium-094393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-094393"

>>> k8s: kubelet logs:
* Profile "cilium-094393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-094393"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-094393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-094393"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-094393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-094393"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19690-714802/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 23 Sep 2024 14:00:27 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: offline-docker-537186
contexts:
- context:
    cluster: offline-docker-537186
    extensions:
    - extension:
        last-update: Mon, 23 Sep 2024 14:00:27 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: offline-docker-537186
  name: offline-docker-537186
current-context: offline-docker-537186
kind: Config
preferences: {}
users:
- name: offline-docker-537186
  user:
    client-certificate: /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/offline-docker-537186/client.crt
    client-key: /home/jenkins/minikube-integration/19690-714802/.minikube/profiles/offline-docker-537186/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-094393

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-094393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-094393"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-094393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-094393"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-094393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-094393"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-094393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-094393"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-094393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-094393"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-094393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-094393"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-094393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-094393"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-094393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-094393"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-094393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-094393"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-094393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-094393"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-094393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-094393"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-094393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-094393"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-094393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-094393"

>>> host: containerd config dump:
* Profile "cilium-094393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-094393"

>>> host: crio daemon status:
* Profile "cilium-094393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-094393"

>>> host: crio daemon config:
* Profile "cilium-094393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-094393"

>>> host: /etc/crio:
* Profile "cilium-094393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-094393"

>>> host: crio config:
* Profile "cilium-094393" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-094393"

----------------------- debugLogs end: cilium-094393 [took: 3.839453214s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-094393" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-094393
--- SKIP: TestNetworkPlugins/group/cilium (4.00s)

TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-109511" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-109511
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)
