Test Report: Docker_Linux_docker_arm64 19672

d6d2a37830b251a8a712eec07ee86a534797346d:2024-09-20:36297

Tests failed (1/342)

| Order | Failed Test                   | Duration (s) |
| 33    | TestAddons/parallel/Registry  | 74.47        |

TestAddons/parallel/Registry (74.47s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 2.755914ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-lmt9d" [2a3a6aaa-b147-4517-bdc2-529c58ed2d26] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003282306s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-2wp2r" [c8ba5e64-c35c-4fdb-8dfb-ede028619b44] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004253868s
addons_test.go:338: (dbg) Run:  kubectl --context addons-877987 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context addons-877987 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context addons-877987 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.150173785s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-877987 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:357: (dbg) Run:  out/minikube-linux-arm64 -p addons-877987 ip
2024/09/20 16:57:32 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:386: (dbg) Run:  out/minikube-linux-arm64 -p addons-877987 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-877987
helpers_test.go:235: (dbg) docker inspect addons-877987:

-- stdout --
	[
	    {
	        "Id": "d95947c17606bacd3e570a9182c1de827df0aa9fe042efdee7f33d038aec9da4",
	        "Created": "2024-09-20T16:44:18.821839744Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 8822,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-20T16:44:18.997782345Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c94982da1293baee77c00993711af197ed62d6b1a4ee12c0caa4f57c70de4fdc",
	        "ResolvConfPath": "/var/lib/docker/containers/d95947c17606bacd3e570a9182c1de827df0aa9fe042efdee7f33d038aec9da4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d95947c17606bacd3e570a9182c1de827df0aa9fe042efdee7f33d038aec9da4/hostname",
	        "HostsPath": "/var/lib/docker/containers/d95947c17606bacd3e570a9182c1de827df0aa9fe042efdee7f33d038aec9da4/hosts",
	        "LogPath": "/var/lib/docker/containers/d95947c17606bacd3e570a9182c1de827df0aa9fe042efdee7f33d038aec9da4/d95947c17606bacd3e570a9182c1de827df0aa9fe042efdee7f33d038aec9da4-json.log",
	        "Name": "/addons-877987",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-877987:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-877987",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b16cc1c7dd5c336a505a076203b74609a051734b5b94fb24e814acb100192f61-init/diff:/var/lib/docker/overlay2/fab76bcb726d0967c4800d6a9255781ccd228428269d4d62cbf53d43201c9aa2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b16cc1c7dd5c336a505a076203b74609a051734b5b94fb24e814acb100192f61/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b16cc1c7dd5c336a505a076203b74609a051734b5b94fb24e814acb100192f61/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b16cc1c7dd5c336a505a076203b74609a051734b5b94fb24e814acb100192f61/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-877987",
	                "Source": "/var/lib/docker/volumes/addons-877987/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-877987",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-877987",
	                "name.minikube.sigs.k8s.io": "addons-877987",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1167d286f8d7d2f3677e33ba4c26630abd10800af77ecbd4046ce43bf4d768d7",
	            "SandboxKey": "/var/run/docker/netns/1167d286f8d7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-877987": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "33c3cfe9a990b66d6cf2c66c50c72a77a1b8b28d83efd033fb06040528137544",
	                    "EndpointID": "58759938cdc4a1718c0bfc0494fe277897bfe0ba90204b49befc1c8c67342641",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-877987",
	                        "d95947c17606"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-877987 -n addons-877987
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-877987 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-877987 logs -n 25: (1.159785398s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-923497   | jenkins | v1.34.0 | 20 Sep 24 16:43 UTC |                     |
	|         | -p download-only-923497              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 20 Sep 24 16:43 UTC | 20 Sep 24 16:43 UTC |
	| delete  | -p download-only-923497              | download-only-923497   | jenkins | v1.34.0 | 20 Sep 24 16:43 UTC | 20 Sep 24 16:43 UTC |
	| start   | -o=json --download-only              | download-only-777196   | jenkins | v1.34.0 | 20 Sep 24 16:43 UTC |                     |
	|         | -p download-only-777196              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 20 Sep 24 16:43 UTC | 20 Sep 24 16:43 UTC |
	| delete  | -p download-only-777196              | download-only-777196   | jenkins | v1.34.0 | 20 Sep 24 16:43 UTC | 20 Sep 24 16:43 UTC |
	| delete  | -p download-only-923497              | download-only-923497   | jenkins | v1.34.0 | 20 Sep 24 16:43 UTC | 20 Sep 24 16:43 UTC |
	| delete  | -p download-only-777196              | download-only-777196   | jenkins | v1.34.0 | 20 Sep 24 16:43 UTC | 20 Sep 24 16:43 UTC |
	| start   | --download-only -p                   | download-docker-108524 | jenkins | v1.34.0 | 20 Sep 24 16:43 UTC |                     |
	|         | download-docker-108524               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| delete  | -p download-docker-108524            | download-docker-108524 | jenkins | v1.34.0 | 20 Sep 24 16:43 UTC | 20 Sep 24 16:43 UTC |
	| start   | --download-only -p                   | binary-mirror-528781   | jenkins | v1.34.0 | 20 Sep 24 16:43 UTC |                     |
	|         | binary-mirror-528781                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:37459               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-528781              | binary-mirror-528781   | jenkins | v1.34.0 | 20 Sep 24 16:43 UTC | 20 Sep 24 16:43 UTC |
	| addons  | enable dashboard -p                  | addons-877987          | jenkins | v1.34.0 | 20 Sep 24 16:43 UTC |                     |
	|         | addons-877987                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-877987          | jenkins | v1.34.0 | 20 Sep 24 16:43 UTC |                     |
	|         | addons-877987                        |                        |         |         |                     |                     |
	| start   | -p addons-877987 --wait=true         | addons-877987          | jenkins | v1.34.0 | 20 Sep 24 16:43 UTC | 20 Sep 24 16:47 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	| addons  | addons-877987 addons disable         | addons-877987          | jenkins | v1.34.0 | 20 Sep 24 16:48 UTC | 20 Sep 24 16:48 UTC |
	|         | volcano --alsologtostderr -v=1       |                        |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-877987          | jenkins | v1.34.0 | 20 Sep 24 16:56 UTC | 20 Sep 24 16:56 UTC |
	|         | -p addons-877987                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-877987 addons disable         | addons-877987          | jenkins | v1.34.0 | 20 Sep 24 16:56 UTC | 20 Sep 24 16:56 UTC |
	|         | headlamp --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-877987 addons                 | addons-877987          | jenkins | v1.34.0 | 20 Sep 24 16:57 UTC | 20 Sep 24 16:57 UTC |
	|         | disable csi-hostpath-driver          |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-877987 addons                 | addons-877987          | jenkins | v1.34.0 | 20 Sep 24 16:57 UTC | 20 Sep 24 16:57 UTC |
	|         | disable volumesnapshots              |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-877987 addons                 | addons-877987          | jenkins | v1.34.0 | 20 Sep 24 16:57 UTC | 20 Sep 24 16:57 UTC |
	|         | disable metrics-server               |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-877987          | jenkins | v1.34.0 | 20 Sep 24 16:57 UTC |                     |
	|         | addons-877987                        |                        |         |         |                     |                     |
	| ip      | addons-877987 ip                     | addons-877987          | jenkins | v1.34.0 | 20 Sep 24 16:57 UTC | 20 Sep 24 16:57 UTC |
	| addons  | addons-877987 addons disable         | addons-877987          | jenkins | v1.34.0 | 20 Sep 24 16:57 UTC | 20 Sep 24 16:57 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 16:43:54
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 16:43:54.520087    8307 out.go:345] Setting OutFile to fd 1 ...
	I0920 16:43:54.520275    8307 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 16:43:54.520287    8307 out.go:358] Setting ErrFile to fd 2...
	I0920 16:43:54.520294    8307 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 16:43:54.520585    8307 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-2235/.minikube/bin
	I0920 16:43:54.521076    8307 out.go:352] Setting JSON to false
	I0920 16:43:54.521883    8307 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1586,"bootTime":1726849049,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0920 16:43:54.521954    8307 start.go:139] virtualization:  
	I0920 16:43:54.524687    8307 out.go:177] * [addons-877987] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0920 16:43:54.527977    8307 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 16:43:54.528137    8307 notify.go:220] Checking for updates...
	I0920 16:43:54.532790    8307 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 16:43:54.535561    8307 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-2235/kubeconfig
	I0920 16:43:54.538001    8307 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-2235/.minikube
	I0920 16:43:54.540081    8307 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0920 16:43:54.542350    8307 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 16:43:54.545136    8307 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 16:43:54.570396    8307 docker.go:123] docker version: linux-27.3.0:Docker Engine - Community
	I0920 16:43:54.570511    8307 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 16:43:54.625536    8307 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-20 16:43:54.615721842 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.6]] Warnings:<nil>}}
	I0920 16:43:54.625650    8307 docker.go:318] overlay module found
	I0920 16:43:54.627739    8307 out.go:177] * Using the docker driver based on user configuration
	I0920 16:43:54.629899    8307 start.go:297] selected driver: docker
	I0920 16:43:54.629930    8307 start.go:901] validating driver "docker" against <nil>
	I0920 16:43:54.629944    8307 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 16:43:54.630697    8307 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 16:43:54.678980    8307 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-20 16:43:54.669597018 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.6]] Warnings:<nil>}}
	I0920 16:43:54.679177    8307 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 16:43:54.679392    8307 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 16:43:54.681875    8307 out.go:177] * Using Docker driver with root privileges
	I0920 16:43:54.683841    8307 cni.go:84] Creating CNI manager for ""
	I0920 16:43:54.683902    8307 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 16:43:54.683925    8307 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 16:43:54.684008    8307 start.go:340] cluster config:
	{Name:addons-877987 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-877987 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 16:43:54.686474    8307 out.go:177] * Starting "addons-877987" primary control-plane node in "addons-877987" cluster
	I0920 16:43:54.688549    8307 cache.go:121] Beginning downloading kic base image for docker with docker
	I0920 16:43:54.690653    8307 out.go:177] * Pulling base image v0.0.45-1726784731-19672 ...
	I0920 16:43:54.692800    8307 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 16:43:54.692845    8307 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-2235/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 16:43:54.692858    8307 cache.go:56] Caching tarball of preloaded images
	I0920 16:43:54.692857    8307 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon
	I0920 16:43:54.692936    8307 preload.go:172] Found /home/jenkins/minikube-integration/19672-2235/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 16:43:54.692946    8307 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 16:43:54.693282    8307 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/addons-877987/config.json ...
	I0920 16:43:54.693310    8307 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/addons-877987/config.json: {Name:mkf2e3ecc51cae16a5656830a7678f4e5142cf00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:43:54.707977    8307 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
	I0920 16:43:54.708072    8307 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory
	I0920 16:43:54.708090    8307 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory, skipping pull
	I0920 16:43:54.708095    8307 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed exists in cache, skipping pull
	I0920 16:43:54.708102    8307 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed as a tarball
	I0920 16:43:54.708107    8307 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed from local cache
	I0920 16:44:12.104421    8307 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed from cached tarball
	I0920 16:44:12.104468    8307 cache.go:194] Successfully downloaded all kic artifacts
	I0920 16:44:12.104498    8307 start.go:360] acquireMachinesLock for addons-877987: {Name:mk221d5c4555b6842e86454c467ee0d2d76e805a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 16:44:12.104621    8307 start.go:364] duration metric: took 99.514µs to acquireMachinesLock for "addons-877987"
	I0920 16:44:12.104651    8307 start.go:93] Provisioning new machine with config: &{Name:addons-877987 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-877987 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 16:44:12.104734    8307 start.go:125] createHost starting for "" (driver="docker")
	I0920 16:44:12.107964    8307 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0920 16:44:12.108252    8307 start.go:159] libmachine.API.Create for "addons-877987" (driver="docker")
	I0920 16:44:12.108294    8307 client.go:168] LocalClient.Create starting
	I0920 16:44:12.108448    8307 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19672-2235/.minikube/certs/ca.pem
	I0920 16:44:12.391738    8307 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19672-2235/.minikube/certs/cert.pem
	I0920 16:44:12.662504    8307 cli_runner.go:164] Run: docker network inspect addons-877987 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0920 16:44:12.678013    8307 cli_runner.go:211] docker network inspect addons-877987 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0920 16:44:12.678098    8307 network_create.go:284] running [docker network inspect addons-877987] to gather additional debugging logs...
	I0920 16:44:12.678129    8307 cli_runner.go:164] Run: docker network inspect addons-877987
	W0920 16:44:12.692725    8307 cli_runner.go:211] docker network inspect addons-877987 returned with exit code 1
	I0920 16:44:12.692759    8307 network_create.go:287] error running [docker network inspect addons-877987]: docker network inspect addons-877987: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-877987 not found
	I0920 16:44:12.692772    8307 network_create.go:289] output of [docker network inspect addons-877987]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-877987 not found
	
	** /stderr **
	I0920 16:44:12.692871    8307 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0920 16:44:12.708657    8307 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40017f00e0}
	I0920 16:44:12.708708    8307 network_create.go:124] attempt to create docker network addons-877987 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0920 16:44:12.708810    8307 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-877987 addons-877987
	I0920 16:44:12.778583    8307 network_create.go:108] docker network addons-877987 192.168.49.0/24 created
	I0920 16:44:12.778619    8307 kic.go:121] calculated static IP "192.168.49.2" for the "addons-877987" container
	I0920 16:44:12.778704    8307 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0920 16:44:12.793536    8307 cli_runner.go:164] Run: docker volume create addons-877987 --label name.minikube.sigs.k8s.io=addons-877987 --label created_by.minikube.sigs.k8s.io=true
	I0920 16:44:12.810527    8307 oci.go:103] Successfully created a docker volume addons-877987
	I0920 16:44:12.810622    8307 cli_runner.go:164] Run: docker run --rm --name addons-877987-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-877987 --entrypoint /usr/bin/test -v addons-877987:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -d /var/lib
	I0920 16:44:15.018703    8307 cli_runner.go:217] Completed: docker run --rm --name addons-877987-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-877987 --entrypoint /usr/bin/test -v addons-877987:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -d /var/lib: (2.20803358s)
	I0920 16:44:15.018732    8307 oci.go:107] Successfully prepared a docker volume addons-877987
	I0920 16:44:15.018753    8307 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 16:44:15.018772    8307 kic.go:194] Starting extracting preloaded images to volume ...
	I0920 16:44:15.018839    8307 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19672-2235/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-877987:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -I lz4 -xf /preloaded.tar -C /extractDir
	I0920 16:44:18.755755    8307 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19672-2235/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-877987:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -I lz4 -xf /preloaded.tar -C /extractDir: (3.736874567s)
	I0920 16:44:18.755787    8307 kic.go:203] duration metric: took 3.737011924s to extract preloaded images to volume ...
	W0920 16:44:18.755951    8307 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0920 16:44:18.756069    8307 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0920 16:44:18.807666    8307 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-877987 --name addons-877987 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-877987 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-877987 --network addons-877987 --ip 192.168.49.2 --volume addons-877987:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed
	I0920 16:44:19.159816    8307 cli_runner.go:164] Run: docker container inspect addons-877987 --format={{.State.Running}}
	I0920 16:44:19.187040    8307 cli_runner.go:164] Run: docker container inspect addons-877987 --format={{.State.Status}}
	I0920 16:44:19.208953    8307 cli_runner.go:164] Run: docker exec addons-877987 stat /var/lib/dpkg/alternatives/iptables
	I0920 16:44:19.275920    8307 oci.go:144] the created container "addons-877987" has a running status.
	I0920 16:44:19.275952    8307 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19672-2235/.minikube/machines/addons-877987/id_rsa...
	I0920 16:44:19.520474    8307 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19672-2235/.minikube/machines/addons-877987/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0920 16:44:19.548325    8307 cli_runner.go:164] Run: docker container inspect addons-877987 --format={{.State.Status}}
	I0920 16:44:19.569407    8307 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0920 16:44:19.569426    8307 kic_runner.go:114] Args: [docker exec --privileged addons-877987 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0920 16:44:19.656723    8307 cli_runner.go:164] Run: docker container inspect addons-877987 --format={{.State.Status}}
	I0920 16:44:19.678500    8307 machine.go:93] provisionDockerMachine start ...
	I0920 16:44:19.678592    8307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877987
	I0920 16:44:19.703704    8307 main.go:141] libmachine: Using SSH client type: native
	I0920 16:44:19.703990    8307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0920 16:44:19.704007    8307 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 16:44:19.704687    8307 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55594->127.0.0.1:32768: read: connection reset by peer
	I0920 16:44:22.837525    8307 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-877987
	
	I0920 16:44:22.837592    8307 ubuntu.go:169] provisioning hostname "addons-877987"
	I0920 16:44:22.837688    8307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877987
	I0920 16:44:22.853963    8307 main.go:141] libmachine: Using SSH client type: native
	I0920 16:44:22.854212    8307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0920 16:44:22.854224    8307 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-877987 && echo "addons-877987" | sudo tee /etc/hostname
	I0920 16:44:22.997849    8307 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-877987
	
	I0920 16:44:22.997936    8307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877987
	I0920 16:44:23.014498    8307 main.go:141] libmachine: Using SSH client type: native
	I0920 16:44:23.014745    8307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0920 16:44:23.014768    8307 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-877987' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-877987/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-877987' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 16:44:23.146103    8307 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 16:44:23.146129    8307 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19672-2235/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-2235/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-2235/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-2235/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-2235/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-2235/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-2235/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-2235/.minikube}
	I0920 16:44:23.146151    8307 ubuntu.go:177] setting up certificates
	I0920 16:44:23.146160    8307 provision.go:84] configureAuth start
	I0920 16:44:23.146223    8307 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-877987
	I0920 16:44:23.163229    8307 provision.go:143] copyHostCerts
	I0920 16:44:23.163305    8307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-2235/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-2235/.minikube/ca.pem (1082 bytes)
	I0920 16:44:23.163418    8307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-2235/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-2235/.minikube/cert.pem (1123 bytes)
	I0920 16:44:23.163470    8307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-2235/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-2235/.minikube/key.pem (1679 bytes)
	I0920 16:44:23.163514    8307 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-2235/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-2235/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-2235/.minikube/certs/ca-key.pem org=jenkins.addons-877987 san=[127.0.0.1 192.168.49.2 addons-877987 localhost minikube]
	I0920 16:44:23.672518    8307 provision.go:177] copyRemoteCerts
	I0920 16:44:23.672590    8307 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 16:44:23.672636    8307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877987
	I0920 16:44:23.689850    8307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-2235/.minikube/machines/addons-877987/id_rsa Username:docker}
	I0920 16:44:23.782956    8307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-2235/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0920 16:44:23.807505    8307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-2235/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0920 16:44:23.831141    8307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-2235/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0920 16:44:23.854940    8307 provision.go:87] duration metric: took 708.766629ms to configureAuth
	I0920 16:44:23.855000    8307 ubuntu.go:193] setting minikube options for container-runtime
	I0920 16:44:23.855197    8307 config.go:182] Loaded profile config "addons-877987": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 16:44:23.855294    8307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877987
	I0920 16:44:23.872194    8307 main.go:141] libmachine: Using SSH client type: native
	I0920 16:44:23.872450    8307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0920 16:44:23.872468    8307 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0920 16:44:24.006471    8307 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0920 16:44:24.006490    8307 ubuntu.go:71] root file system type: overlay
	I0920 16:44:24.006595    8307 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0920 16:44:24.006666    8307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877987
	I0920 16:44:24.033655    8307 main.go:141] libmachine: Using SSH client type: native
	I0920 16:44:24.033922    8307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0920 16:44:24.034016    8307 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0920 16:44:24.178479    8307 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0920 16:44:24.178568    8307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877987
	I0920 16:44:24.196231    8307 main.go:141] libmachine: Using SSH client type: native
	I0920 16:44:24.196470    8307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0920 16:44:24.196502    8307 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0920 16:44:24.959646    8307 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-09-19 14:24:16.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-09-20 16:44:24.170368569 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0920 16:44:24.959745    8307 machine.go:96] duration metric: took 5.281220375s to provisionDockerMachine
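The rewritten docker.service above relies on how systemd accumulates ExecStart= directives: for a non-oneshot service, an empty ExecStart= clears anything inherited, so only the command that follows it takes effect (this is what the comment block in the diff describes). A minimal standalone sketch of that last-one-wins resolution, using a scratch file and illustrative dockerd flags rather than the real unit:

```shell
# Resolve the effective ExecStart the way systemd does: an empty
# "ExecStart=" resets the list, so the last surviving entry wins.
UNIT=$(mktemp)
cat > "$UNIT" <<'EOF'
ExecStart=/usr/bin/dockerd -H fd://
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
EOF
EFFECTIVE=$(awk -F= '/^ExecStart=/ { if ($2 == "") cmd = ""; else cmd = substr($0, 11) } END { print cmd }' "$UNIT")
echo "$EFFECTIVE"
```

In the log itself, `systemctl daemon-reload` performs this resolution after the rewritten unit is moved into place.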
	I0920 16:44:24.959800    8307 client.go:171] duration metric: took 12.851494838s to LocalClient.Create
	I0920 16:44:24.959843    8307 start.go:167] duration metric: took 12.851591578s to libmachine.API.Create "addons-877987"
	I0920 16:44:24.959867    8307 start.go:293] postStartSetup for "addons-877987" (driver="docker")
	I0920 16:44:24.959912    8307 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 16:44:24.960000    8307 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 16:44:24.960069    8307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877987
	I0920 16:44:24.979053    8307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-2235/.minikube/machines/addons-877987/id_rsa Username:docker}
	I0920 16:44:25.075552    8307 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 16:44:25.078673    8307 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0920 16:44:25.078710    8307 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0920 16:44:25.078722    8307 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0920 16:44:25.078729    8307 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0920 16:44:25.078740    8307 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-2235/.minikube/addons for local assets ...
	I0920 16:44:25.078814    8307 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-2235/.minikube/files for local assets ...
	I0920 16:44:25.078838    8307 start.go:296] duration metric: took 118.931939ms for postStartSetup
	I0920 16:44:25.079152    8307 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-877987
	I0920 16:44:25.096442    8307 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/addons-877987/config.json ...
	I0920 16:44:25.096735    8307 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 16:44:25.096787    8307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877987
	I0920 16:44:25.115105    8307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-2235/.minikube/machines/addons-877987/id_rsa Username:docker}
	I0920 16:44:25.207105    8307 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0920 16:44:25.211624    8307 start.go:128] duration metric: took 13.106874758s to createHost
	I0920 16:44:25.211648    8307 start.go:83] releasing machines lock for "addons-877987", held for 13.107013493s
	I0920 16:44:25.211738    8307 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-877987
	I0920 16:44:25.227312    8307 ssh_runner.go:195] Run: cat /version.json
	I0920 16:44:25.227366    8307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877987
	I0920 16:44:25.227598    8307 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 16:44:25.227667    8307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877987
	I0920 16:44:25.247976    8307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-2235/.minikube/machines/addons-877987/id_rsa Username:docker}
	I0920 16:44:25.250523    8307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-2235/.minikube/machines/addons-877987/id_rsa Username:docker}
	I0920 16:44:25.468285    8307 ssh_runner.go:195] Run: systemctl --version
	I0920 16:44:25.472716    8307 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0920 16:44:25.477867    8307 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0920 16:44:25.503533    8307 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0920 16:44:25.503612    8307 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 16:44:25.533791    8307 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
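The loopback patch above rewrites `cniVersion` in place with sed. A scratch-file sketch of the same substitution (the JSON fragment here is an illustrative stand-in, not the node's actual conflist):

```shell
# Rewrite cniVersion in a CNI config the same way the find/sed step does.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
{
  "cniVersion": "0.3.1",
  "type": "loopback"
}
EOF
sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' "$CONF"
VERSION=$(grep -o '"cniVersion": "[^"]*"' "$CONF")
echo "$VERSION"
```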
	I0920 16:44:25.533865    8307 start.go:495] detecting cgroup driver to use...
	I0920 16:44:25.533911    8307 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0920 16:44:25.534053    8307 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 16:44:25.550171    8307 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0920 16:44:25.559861    8307 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0920 16:44:25.569780    8307 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0920 16:44:25.569893    8307 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0920 16:44:25.580226    8307 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 16:44:25.590111    8307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0920 16:44:25.600464    8307 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 16:44:25.610295    8307 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 16:44:25.619974    8307 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0920 16:44:25.630047    8307 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0920 16:44:25.639743    8307 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
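The run of sed edits above flips containerd to the "cgroupfs" driver; the key one is the SystemdCgroup rewrite. A scratch-file sketch of that single substitution (the one-line config.toml fragment is illustrative, not the node's real file):

```shell
# Flip SystemdCgroup to false while preserving the line's indentation,
# mirroring the sed command run against /etc/containerd/config.toml above.
TOML=$(mktemp)
printf '%s\n' '    SystemdCgroup = true' > "$TOML"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$TOML"
DRIVER_LINE=$(cat "$TOML")
echo "$DRIVER_LINE"
```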
	I0920 16:44:25.649550    8307 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 16:44:25.658262    8307 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 16:44:25.658402    8307 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 16:44:25.673675    8307 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 16:44:25.683512    8307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 16:44:25.770052    8307 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0920 16:44:25.870831    8307 start.go:495] detecting cgroup driver to use...
	I0920 16:44:25.870931    8307 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0920 16:44:25.871029    8307 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0920 16:44:25.884767    8307 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0920 16:44:25.884876    8307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0920 16:44:25.898913    8307 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 16:44:25.919706    8307 ssh_runner.go:195] Run: which cri-dockerd
	I0920 16:44:25.923983    8307 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0920 16:44:25.937745    8307 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0920 16:44:25.958469    8307 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0920 16:44:26.063067    8307 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0920 16:44:26.171944    8307 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0920 16:44:26.172110    8307 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0920 16:44:26.192870    8307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 16:44:26.284487    8307 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0920 16:44:26.551982    8307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0920 16:44:26.564432    8307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0920 16:44:26.577900    8307 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0920 16:44:26.670746    8307 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0920 16:44:26.765560    8307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 16:44:26.846464    8307 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0920 16:44:26.860334    8307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0920 16:44:26.871273    8307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 16:44:26.959281    8307 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0920 16:44:27.036759    8307 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0920 16:44:27.036906    8307 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0920 16:44:27.041400    8307 start.go:563] Will wait 60s for crictl version
	I0920 16:44:27.041545    8307 ssh_runner.go:195] Run: which crictl
	I0920 16:44:27.047153    8307 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 16:44:27.090884    8307 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.0
	RuntimeApiVersion:  v1
	I0920 16:44:27.091022    8307 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0920 16:44:27.113743    8307 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0920 16:44:27.138966    8307 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.0 ...
	I0920 16:44:27.139093    8307 cli_runner.go:164] Run: docker network inspect addons-877987 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0920 16:44:27.154810    8307 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0920 16:44:27.158520    8307 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
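The /etc/hosts update above is idempotent: it filters out any existing host.minikube.internal entry before appending the new one, so reruns never duplicate the line. A standalone sketch of the same pattern against a temp file (addresses copied from the log, but operating on scratch data only):

```shell
# Idempotent hosts-entry update: drop the old entry, append the new one,
# then install the rewritten file atomically with mv.
HOSTS=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.49.1\thost.minikube.internal\n' > "$HOSTS"
{ grep -v $'\thost.minikube.internal$' "$HOSTS"; printf '192.168.49.1\thost.minikube.internal\n'; } > "$HOSTS.new"
mv "$HOSTS.new" "$HOSTS"
COUNT=$(grep -c 'host.minikube.internal' "$HOSTS")
echo "$COUNT"
```

Even after repeated runs, COUNT stays at 1.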
	I0920 16:44:27.169854    8307 kubeadm.go:883] updating cluster {Name:addons-877987 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-877987 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 16:44:27.169970    8307 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 16:44:27.170024    8307 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0920 16:44:27.188920    8307 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0920 16:44:27.188946    8307 docker.go:615] Images already preloaded, skipping extraction
	I0920 16:44:27.189011    8307 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0920 16:44:27.207090    8307 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0920 16:44:27.207114    8307 cache_images.go:84] Images are preloaded, skipping loading
	I0920 16:44:27.207125    8307 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 docker true true} ...
	I0920 16:44:27.207266    8307 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-877987 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-877987 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 16:44:27.207372    8307 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0920 16:44:27.249268    8307 cni.go:84] Creating CNI manager for ""
	I0920 16:44:27.249308    8307 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 16:44:27.249320    8307 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 16:44:27.249341    8307 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-877987 NodeName:addons-877987 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 16:44:27.249506    8307 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-877987"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 16:44:27.249597    8307 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 16:44:27.258639    8307 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 16:44:27.258712    8307 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 16:44:27.267363    8307 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0920 16:44:27.286346    8307 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 16:44:27.304368    8307 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0920 16:44:27.322250    8307 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0920 16:44:27.325692    8307 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 16:44:27.336217    8307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 16:44:27.421986    8307 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 16:44:27.442966    8307 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/addons-877987 for IP: 192.168.49.2
	I0920 16:44:27.442986    8307 certs.go:194] generating shared ca certs ...
	I0920 16:44:27.443001    8307 certs.go:226] acquiring lock for ca certs: {Name:mk539b11c006d047f7d221e4c2dcf26c06d5e779 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:44:27.443123    8307 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-2235/.minikube/ca.key
	I0920 16:44:27.826406    8307 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-2235/.minikube/ca.crt ...
	I0920 16:44:27.826440    8307 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-2235/.minikube/ca.crt: {Name:mk5af13a70dac5b7e434eaf057dfe487d146bf5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:44:27.827044    8307 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-2235/.minikube/ca.key ...
	I0920 16:44:27.827097    8307 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-2235/.minikube/ca.key: {Name:mkc6071658ed474787d12f09fb370c2d8c3a8e62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:44:27.827229    8307 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-2235/.minikube/proxy-client-ca.key
	I0920 16:44:28.376351    8307 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-2235/.minikube/proxy-client-ca.crt ...
	I0920 16:44:28.376384    8307 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-2235/.minikube/proxy-client-ca.crt: {Name:mkb80da8e44642552765b0422cd2ea11ccfb23b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:44:28.376607    8307 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-2235/.minikube/proxy-client-ca.key ...
	I0920 16:44:28.376622    8307 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-2235/.minikube/proxy-client-ca.key: {Name:mke01337e9b6efbb5075844049fc7a26db49de6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:44:28.376708    8307 certs.go:256] generating profile certs ...
	I0920 16:44:28.376770    8307 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/addons-877987/client.key
	I0920 16:44:28.376796    8307 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/addons-877987/client.crt with IP's: []
	I0920 16:44:29.534696    8307 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/addons-877987/client.crt ...
	I0920 16:44:29.534730    8307 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/addons-877987/client.crt: {Name:mk810d0e54dde39af38c3cd8fb6a8ae5e9408977 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:44:29.534925    8307 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/addons-877987/client.key ...
	I0920 16:44:29.534941    8307 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/addons-877987/client.key: {Name:mk89fab8c443f6b170b725ec9cd45b281b4c7e43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:44:29.535027    8307 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/addons-877987/apiserver.key.c6544f1a
	I0920 16:44:29.535046    8307 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/addons-877987/apiserver.crt.c6544f1a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0920 16:44:29.930141    8307 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/addons-877987/apiserver.crt.c6544f1a ...
	I0920 16:44:29.930175    8307 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/addons-877987/apiserver.crt.c6544f1a: {Name:mk3b3b68436f509194f43d38146ae13f1765c13b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:44:29.930396    8307 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/addons-877987/apiserver.key.c6544f1a ...
	I0920 16:44:29.930413    8307 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/addons-877987/apiserver.key.c6544f1a: {Name:mk7fbc7421b690435882ccc40bd022e33069a6e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:44:29.930501    8307 certs.go:381] copying /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/addons-877987/apiserver.crt.c6544f1a -> /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/addons-877987/apiserver.crt
	I0920 16:44:29.930581    8307 certs.go:385] copying /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/addons-877987/apiserver.key.c6544f1a -> /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/addons-877987/apiserver.key
	I0920 16:44:29.930635    8307 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/addons-877987/proxy-client.key
	I0920 16:44:29.930655    8307 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/addons-877987/proxy-client.crt with IP's: []
	I0920 16:44:30.260423    8307 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/addons-877987/proxy-client.crt ...
	I0920 16:44:30.260460    8307 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/addons-877987/proxy-client.crt: {Name:mkd614d149f84bdbb7bd52fc02a5a988dcfbe503 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:44:30.260691    8307 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/addons-877987/proxy-client.key ...
	I0920 16:44:30.260707    8307 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/addons-877987/proxy-client.key: {Name:mk563c190898ad10a3e8202468da5e54def6a022 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:44:30.260916    8307 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-2235/.minikube/certs/ca-key.pem (1679 bytes)
	I0920 16:44:30.260960    8307 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-2235/.minikube/certs/ca.pem (1082 bytes)
	I0920 16:44:30.260989    8307 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-2235/.minikube/certs/cert.pem (1123 bytes)
	I0920 16:44:30.261017    8307 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-2235/.minikube/certs/key.pem (1679 bytes)
	I0920 16:44:30.261626    8307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-2235/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 16:44:30.288919    8307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-2235/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0920 16:44:30.314297    8307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-2235/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 16:44:30.338941    8307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-2235/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0920 16:44:30.364350    8307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/addons-877987/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0920 16:44:30.392057    8307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/addons-877987/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 16:44:30.422137    8307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/addons-877987/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 16:44:30.450623    8307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/addons-877987/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 16:44:30.474517    8307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-2235/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 16:44:30.498533    8307 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 16:44:30.516058    8307 ssh_runner.go:195] Run: openssl version
	I0920 16:44:30.521380    8307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 16:44:30.530887    8307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 16:44:30.534516    8307 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0920 16:44:30.534587    8307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 16:44:30.541519    8307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 16:44:30.550612    8307 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 16:44:30.553737    8307 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 16:44:30.553784    8307 kubeadm.go:392] StartCluster: {Name:addons-877987 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-877987 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 16:44:30.553912    8307 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0920 16:44:30.569013    8307 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 16:44:30.577535    8307 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 16:44:30.586433    8307 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0920 16:44:30.586500    8307 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 16:44:30.595162    8307 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 16:44:30.595185    8307 kubeadm.go:157] found existing configuration files:
	
	I0920 16:44:30.595240    8307 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 16:44:30.604356    8307 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 16:44:30.604433    8307 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 16:44:30.613105    8307 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 16:44:30.622052    8307 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 16:44:30.622119    8307 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 16:44:30.630763    8307 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 16:44:30.639510    8307 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 16:44:30.639604    8307 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 16:44:30.648285    8307 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 16:44:30.656900    8307 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 16:44:30.656966    8307 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 16:44:30.665336    8307 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0920 16:44:30.711423    8307 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 16:44:30.711737    8307 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 16:44:30.734952    8307 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0920 16:44:30.735026    8307 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I0920 16:44:30.735065    8307 kubeadm.go:310] OS: Linux
	I0920 16:44:30.735115    8307 kubeadm.go:310] CGROUPS_CPU: enabled
	I0920 16:44:30.735166    8307 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0920 16:44:30.735217    8307 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0920 16:44:30.735267    8307 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0920 16:44:30.735317    8307 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0920 16:44:30.735369    8307 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0920 16:44:30.735436    8307 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0920 16:44:30.735488    8307 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0920 16:44:30.735542    8307 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0920 16:44:30.795132    8307 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 16:44:30.795323    8307 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 16:44:30.795479    8307 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 16:44:30.810672    8307 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 16:44:30.815601    8307 out.go:235]   - Generating certificates and keys ...
	I0920 16:44:30.815790    8307 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 16:44:30.815906    8307 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 16:44:30.999283    8307 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0920 16:44:31.570686    8307 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0920 16:44:31.792253    8307 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0920 16:44:32.367974    8307 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0920 16:44:32.831025    8307 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0920 16:44:32.831386    8307 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-877987 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0920 16:44:33.242865    8307 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0920 16:44:33.243212    8307 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-877987 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0920 16:44:33.676874    8307 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0920 16:44:34.184288    8307 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0920 16:44:34.458143    8307 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0920 16:44:34.458424    8307 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 16:44:34.783056    8307 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 16:44:34.918262    8307 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 16:44:35.298900    8307 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 16:44:36.537829    8307 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 16:44:36.961993    8307 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 16:44:36.962807    8307 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 16:44:36.967890    8307 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 16:44:36.970531    8307 out.go:235]   - Booting up control plane ...
	I0920 16:44:36.970643    8307 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 16:44:36.970733    8307 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 16:44:36.971341    8307 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 16:44:36.981636    8307 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 16:44:36.987729    8307 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 16:44:36.987789    8307 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 16:44:37.112429    8307 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 16:44:37.112587    8307 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 16:44:38.123155    8307 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.010677628s
	I0920 16:44:38.123244    8307 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 16:44:45.129468    8307 kubeadm.go:310] [api-check] The API server is healthy after 7.004799001s
	I0920 16:44:45.192817    8307 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 16:44:45.710220    8307 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 16:44:45.733042    8307 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 16:44:45.733259    8307 kubeadm.go:310] [mark-control-plane] Marking the node addons-877987 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 16:44:45.743782    8307 kubeadm.go:310] [bootstrap-token] Using token: 9d72in.480gru1u9ujudilj
	I0920 16:44:45.746276    8307 out.go:235]   - Configuring RBAC rules ...
	I0920 16:44:45.746424    8307 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 16:44:45.750868    8307 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 16:44:45.760064    8307 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 16:44:45.763708    8307 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 16:44:45.767663    8307 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 16:44:45.771474    8307 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 16:44:45.903922    8307 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 16:44:46.330245    8307 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 16:44:46.902979    8307 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 16:44:46.904158    8307 kubeadm.go:310] 
	I0920 16:44:46.904231    8307 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 16:44:46.904237    8307 kubeadm.go:310] 
	I0920 16:44:46.904333    8307 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 16:44:46.904349    8307 kubeadm.go:310] 
	I0920 16:44:46.904374    8307 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 16:44:46.904452    8307 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 16:44:46.904508    8307 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 16:44:46.904520    8307 kubeadm.go:310] 
	I0920 16:44:46.904578    8307 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 16:44:46.904586    8307 kubeadm.go:310] 
	I0920 16:44:46.904633    8307 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 16:44:46.904641    8307 kubeadm.go:310] 
	I0920 16:44:46.904692    8307 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 16:44:46.904774    8307 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 16:44:46.904846    8307 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 16:44:46.904855    8307 kubeadm.go:310] 
	I0920 16:44:46.904938    8307 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 16:44:46.905018    8307 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 16:44:46.905027    8307 kubeadm.go:310] 
	I0920 16:44:46.905109    8307 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 9d72in.480gru1u9ujudilj \
	I0920 16:44:46.905214    8307 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d039dfc643410e269f96757eff51180894f5dc32e113f840efb2336fc2b49fa \
	I0920 16:44:46.905238    8307 kubeadm.go:310] 	--control-plane 
	I0920 16:44:46.905243    8307 kubeadm.go:310] 
	I0920 16:44:46.905326    8307 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 16:44:46.905330    8307 kubeadm.go:310] 
	I0920 16:44:46.905417    8307 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 9d72in.480gru1u9ujudilj \
	I0920 16:44:46.905522    8307 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d039dfc643410e269f96757eff51180894f5dc32e113f840efb2336fc2b49fa 
	I0920 16:44:46.908253    8307 kubeadm.go:310] W0920 16:44:30.707226    1825 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 16:44:46.908552    8307 kubeadm.go:310] W0920 16:44:30.708784    1825 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 16:44:46.908771    8307 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
	I0920 16:44:46.908887    8307 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 16:44:46.908912    8307 cni.go:84] Creating CNI manager for ""
	I0920 16:44:46.908929    8307 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 16:44:46.913103    8307 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 16:44:46.915405    8307 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 16:44:46.924333    8307 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 16:44:46.941664    8307 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 16:44:46.941787    8307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 16:44:46.941867    8307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-877987 minikube.k8s.io/updated_at=2024_09_20T16_44_46_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1 minikube.k8s.io/name=addons-877987 minikube.k8s.io/primary=true
	I0920 16:44:47.121172    8307 ops.go:34] apiserver oom_adj: -16
	I0920 16:44:47.121342    8307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 16:44:47.621500    8307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 16:44:48.121416    8307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 16:44:48.622332    8307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 16:44:49.121672    8307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 16:44:49.622390    8307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 16:44:50.122224    8307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 16:44:50.622126    8307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 16:44:51.121731    8307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 16:44:51.235314    8307 kubeadm.go:1113] duration metric: took 4.293563606s to wait for elevateKubeSystemPrivileges
	I0920 16:44:51.235347    8307 kubeadm.go:394] duration metric: took 20.68156773s to StartCluster
	I0920 16:44:51.235366    8307 settings.go:142] acquiring lock: {Name:mk231bf5a5cfcfec5102639d93468a1e4a41c89f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:44:51.235493    8307 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19672-2235/kubeconfig
	I0920 16:44:51.235896    8307 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-2235/kubeconfig: {Name:mk389b7f7c7d441a0f49101972b4f99c06538341 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:44:51.236093    8307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0920 16:44:51.236108    8307 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 16:44:51.236343    8307 config.go:182] Loaded profile config "addons-877987": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 16:44:51.236374    8307 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0920 16:44:51.236458    8307 addons.go:69] Setting yakd=true in profile "addons-877987"
	I0920 16:44:51.236474    8307 addons.go:234] Setting addon yakd=true in "addons-877987"
	I0920 16:44:51.236505    8307 host.go:66] Checking if "addons-877987" exists ...
	I0920 16:44:51.236958    8307 cli_runner.go:164] Run: docker container inspect addons-877987 --format={{.State.Status}}
	I0920 16:44:51.237280    8307 addons.go:69] Setting inspektor-gadget=true in profile "addons-877987"
	I0920 16:44:51.237302    8307 addons.go:234] Setting addon inspektor-gadget=true in "addons-877987"
	I0920 16:44:51.237326    8307 host.go:66] Checking if "addons-877987" exists ...
	I0920 16:44:51.237748    8307 cli_runner.go:164] Run: docker container inspect addons-877987 --format={{.State.Status}}
	I0920 16:44:51.237975    8307 addons.go:69] Setting metrics-server=true in profile "addons-877987"
	I0920 16:44:51.237994    8307 addons.go:234] Setting addon metrics-server=true in "addons-877987"
	I0920 16:44:51.238018    8307 host.go:66] Checking if "addons-877987" exists ...
	I0920 16:44:51.238566    8307 cli_runner.go:164] Run: docker container inspect addons-877987 --format={{.State.Status}}
	I0920 16:44:51.242006    8307 addons.go:69] Setting cloud-spanner=true in profile "addons-877987"
	I0920 16:44:51.242086    8307 addons.go:234] Setting addon cloud-spanner=true in "addons-877987"
	I0920 16:44:51.242135    8307 host.go:66] Checking if "addons-877987" exists ...
	I0920 16:44:51.242576    8307 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-877987"
	I0920 16:44:51.242594    8307 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-877987"
	I0920 16:44:51.242614    8307 host.go:66] Checking if "addons-877987" exists ...
	I0920 16:44:51.243009    8307 cli_runner.go:164] Run: docker container inspect addons-877987 --format={{.State.Status}}
	I0920 16:44:51.243511    8307 cli_runner.go:164] Run: docker container inspect addons-877987 --format={{.State.Status}}
	I0920 16:44:51.243726    8307 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-877987"
	I0920 16:44:51.264444    8307 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-877987"
	I0920 16:44:51.264515    8307 host.go:66] Checking if "addons-877987" exists ...
	I0920 16:44:51.265007    8307 cli_runner.go:164] Run: docker container inspect addons-877987 --format={{.State.Status}}
	I0920 16:44:51.243735    8307 addons.go:69] Setting default-storageclass=true in profile "addons-877987"
	I0920 16:44:51.277458    8307 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-877987"
	I0920 16:44:51.277828    8307 cli_runner.go:164] Run: docker container inspect addons-877987 --format={{.State.Status}}
	I0920 16:44:51.243739    8307 addons.go:69] Setting gcp-auth=true in profile "addons-877987"
	I0920 16:44:51.294716    8307 mustload.go:65] Loading cluster: addons-877987
	I0920 16:44:51.294913    8307 config.go:182] Loaded profile config "addons-877987": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 16:44:51.295173    8307 cli_runner.go:164] Run: docker container inspect addons-877987 --format={{.State.Status}}
	I0920 16:44:51.243743    8307 addons.go:69] Setting ingress=true in profile "addons-877987"
	I0920 16:44:51.306370    8307 addons.go:234] Setting addon ingress=true in "addons-877987"
	I0920 16:44:51.306422    8307 host.go:66] Checking if "addons-877987" exists ...
	I0920 16:44:51.306919    8307 cli_runner.go:164] Run: docker container inspect addons-877987 --format={{.State.Status}}
	I0920 16:44:51.243747    8307 addons.go:69] Setting ingress-dns=true in profile "addons-877987"
	I0920 16:44:51.330411    8307 addons.go:234] Setting addon ingress-dns=true in "addons-877987"
	I0920 16:44:51.330461    8307 host.go:66] Checking if "addons-877987" exists ...
	I0920 16:44:51.330943    8307 cli_runner.go:164] Run: docker container inspect addons-877987 --format={{.State.Status}}
	I0920 16:44:51.243773    8307 out.go:177] * Verifying Kubernetes components...
	I0920 16:44:51.244150    8307 addons.go:69] Setting volcano=true in profile "addons-877987"
	I0920 16:44:51.358608    8307 addons.go:234] Setting addon volcano=true in "addons-877987"
	I0920 16:44:51.358650    8307 host.go:66] Checking if "addons-877987" exists ...
	I0920 16:44:51.360048    8307 cli_runner.go:164] Run: docker container inspect addons-877987 --format={{.State.Status}}
	I0920 16:44:51.244160    8307 addons.go:69] Setting registry=true in profile "addons-877987"
	I0920 16:44:51.364270    8307 addons.go:234] Setting addon registry=true in "addons-877987"
	I0920 16:44:51.364315    8307 host.go:66] Checking if "addons-877987" exists ...
	I0920 16:44:51.364798    8307 cli_runner.go:164] Run: docker container inspect addons-877987 --format={{.State.Status}}
	I0920 16:44:51.244164    8307 addons.go:69] Setting storage-provisioner=true in profile "addons-877987"
	I0920 16:44:51.382232    8307 addons.go:234] Setting addon storage-provisioner=true in "addons-877987"
	I0920 16:44:51.382269    8307 host.go:66] Checking if "addons-877987" exists ...
	I0920 16:44:51.382866    8307 cli_runner.go:164] Run: docker container inspect addons-877987 --format={{.State.Status}}
	I0920 16:44:51.244168    8307 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-877987"
	I0920 16:44:51.398425    8307 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-877987"
	I0920 16:44:51.398756    8307 cli_runner.go:164] Run: docker container inspect addons-877987 --format={{.State.Status}}
	I0920 16:44:51.244802    8307 addons.go:69] Setting volumesnapshots=true in profile "addons-877987"
	I0920 16:44:51.407614    8307 addons.go:234] Setting addon volumesnapshots=true in "addons-877987"
	I0920 16:44:51.407649    8307 host.go:66] Checking if "addons-877987" exists ...
	I0920 16:44:51.408292    8307 cli_runner.go:164] Run: docker container inspect addons-877987 --format={{.State.Status}}
	I0920 16:44:51.410619    8307 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0920 16:44:51.413489    8307 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0920 16:44:51.414389    8307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 16:44:51.438899    8307 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 16:44:51.438982    8307 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 16:44:51.439191    8307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877987
	I0920 16:44:51.455920    8307 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0920 16:44:51.455942    8307 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0920 16:44:51.456006    8307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877987
	I0920 16:44:51.478383    8307 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0920 16:44:51.478497    8307 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0920 16:44:51.480679    8307 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0920 16:44:51.480822    8307 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0920 16:44:51.480832    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0920 16:44:51.480892    8307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877987
	I0920 16:44:51.485884    8307 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0920 16:44:51.490404    8307 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0920 16:44:51.524246    8307 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0920 16:44:51.524494    8307 host.go:66] Checking if "addons-877987" exists ...
	I0920 16:44:51.540524    8307 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0920 16:44:51.540543    8307 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0920 16:44:51.540644    8307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877987
	I0920 16:44:51.541272    8307 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0920 16:44:51.546885    8307 addons.go:234] Setting addon default-storageclass=true in "addons-877987"
	I0920 16:44:51.549056    8307 host.go:66] Checking if "addons-877987" exists ...
	I0920 16:44:51.549481    8307 cli_runner.go:164] Run: docker container inspect addons-877987 --format={{.State.Status}}
	I0920 16:44:51.552001    8307 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0920 16:44:51.562131    8307 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 16:44:51.562151    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0920 16:44:51.562215    8307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877987
	I0920 16:44:51.578424    8307 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 16:44:51.594609    8307 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0920 16:44:51.597848    8307 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-877987"
	I0920 16:44:51.599353    8307 host.go:66] Checking if "addons-877987" exists ...
	I0920 16:44:51.599847    8307 cli_runner.go:164] Run: docker container inspect addons-877987 --format={{.State.Status}}
	I0920 16:44:51.598025    8307 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0920 16:44:51.598039    8307 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 16:44:51.599297    8307 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0920 16:44:51.601560    8307 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0920 16:44:51.604347    8307 out.go:177]   - Using image docker.io/registry:2.8.3
	I0920 16:44:51.608185    8307 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0920 16:44:51.608339    8307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877987
	I0920 16:44:51.628955    8307 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0920 16:44:51.633224    8307 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0920 16:44:51.635111    8307 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0920 16:44:51.638096    8307 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0920 16:44:51.638126    8307 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0920 16:44:51.638193    8307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877987
	I0920 16:44:51.645576    8307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-2235/.minikube/machines/addons-877987/id_rsa Username:docker}
	I0920 16:44:51.650553    8307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-2235/.minikube/machines/addons-877987/id_rsa Username:docker}
	I0920 16:44:51.654079    8307 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0920 16:44:51.654244    8307 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0920 16:44:51.654654    8307 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0920 16:44:51.654677    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0920 16:44:51.654744    8307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877987
	I0920 16:44:51.668656    8307 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0920 16:44:51.668677    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0920 16:44:51.668732    8307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877987
	I0920 16:44:51.671484    8307 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0920 16:44:51.672081    8307 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0920 16:44:51.676310    8307 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0920 16:44:51.676342    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0920 16:44:51.676406    8307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877987
	I0920 16:44:51.686757    8307 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 16:44:51.688858    8307 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 16:44:51.688881    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 16:44:51.688948    8307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877987
	I0920 16:44:51.713667    8307 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0920 16:44:51.713690    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0920 16:44:51.713751    8307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877987
	I0920 16:44:51.718405    8307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-2235/.minikube/machines/addons-877987/id_rsa Username:docker}
	I0920 16:44:51.719412    8307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-2235/.minikube/machines/addons-877987/id_rsa Username:docker}
	I0920 16:44:51.724073    8307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-2235/.minikube/machines/addons-877987/id_rsa Username:docker}
	I0920 16:44:51.743004    8307 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 16:44:51.743023    8307 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 16:44:51.743448    8307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877987
	I0920 16:44:51.761305    8307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0920 16:44:51.803002    8307 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0920 16:44:51.809499    8307 out.go:177]   - Using image docker.io/busybox:stable
	I0920 16:44:51.816468    8307 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 16:44:51.816502    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0920 16:44:51.816565    8307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877987
	I0920 16:44:51.820786    8307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-2235/.minikube/machines/addons-877987/id_rsa Username:docker}
	I0920 16:44:51.828810    8307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-2235/.minikube/machines/addons-877987/id_rsa Username:docker}
	I0920 16:44:51.850946    8307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-2235/.minikube/machines/addons-877987/id_rsa Username:docker}
	I0920 16:44:51.852024    8307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-2235/.minikube/machines/addons-877987/id_rsa Username:docker}
	I0920 16:44:51.853262    8307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-2235/.minikube/machines/addons-877987/id_rsa Username:docker}
	I0920 16:44:51.874512    8307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-2235/.minikube/machines/addons-877987/id_rsa Username:docker}
	I0920 16:44:51.875196    8307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-2235/.minikube/machines/addons-877987/id_rsa Username:docker}
	I0920 16:44:51.888298    8307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-2235/.minikube/machines/addons-877987/id_rsa Username:docker}
	I0920 16:44:51.905351    8307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-2235/.minikube/machines/addons-877987/id_rsa Username:docker}
	I0920 16:44:51.921124    8307 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 16:44:52.475990    8307 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0920 16:44:52.476060    8307 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0920 16:44:52.511363    8307 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0920 16:44:52.556877    8307 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0920 16:44:52.556954    8307 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0920 16:44:52.682242    8307 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 16:44:52.682268    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0920 16:44:52.893314    8307 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0920 16:44:52.893338    8307 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0920 16:44:52.948650    8307 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0920 16:44:52.948677    8307 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0920 16:44:52.967605    8307 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0920 16:44:52.967632    8307 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0920 16:44:53.097049    8307 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 16:44:53.097075    8307 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 16:44:53.133697    8307 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 16:44:53.135729    8307 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0920 16:44:53.139791    8307 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0920 16:44:53.146777    8307 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 16:44:53.150850    8307 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 16:44:53.176005    8307 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0920 16:44:53.176045    8307 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0920 16:44:53.204782    8307 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0920 16:44:53.204821    8307 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0920 16:44:53.211275    8307 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0920 16:44:53.211304    8307 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0920 16:44:53.226801    8307 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0920 16:44:53.226829    8307 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0920 16:44:53.244750    8307 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 16:44:53.245408    8307 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 16:44:53.245429    8307 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 16:44:53.252937    8307 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0920 16:44:53.263386    8307 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0920 16:44:53.263427    8307 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0920 16:44:53.360075    8307 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0920 16:44:53.360100    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0920 16:44:53.390826    8307 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0920 16:44:53.390853    8307 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0920 16:44:53.394175    8307 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0920 16:44:53.394202    8307 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0920 16:44:53.418231    8307 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0920 16:44:53.418259    8307 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0920 16:44:53.490112    8307 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0920 16:44:53.490147    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0920 16:44:53.525667    8307 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 16:44:53.543482    8307 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0920 16:44:53.543522    8307 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0920 16:44:53.608975    8307 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0920 16:44:53.609018    8307 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0920 16:44:53.626880    8307 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0920 16:44:53.668730    8307 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0920 16:44:53.668772    8307 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0920 16:44:53.704424    8307 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0920 16:44:53.793237    8307 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0920 16:44:53.793269    8307 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0920 16:44:53.836682    8307 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.075335052s)
	I0920 16:44:53.836713    8307 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0920 16:44:53.837729    8307 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.916582333s)
	I0920 16:44:53.838465    8307 node_ready.go:35] waiting up to 6m0s for node "addons-877987" to be "Ready" ...
	I0920 16:44:53.843719    8307 node_ready.go:49] node "addons-877987" has status "Ready":"True"
	I0920 16:44:53.843746    8307 node_ready.go:38] duration metric: took 5.256118ms for node "addons-877987" to be "Ready" ...
	I0920 16:44:53.843756    8307 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 16:44:53.853678    8307 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-s4fhl" in "kube-system" namespace to be "Ready" ...
	I0920 16:44:53.853965    8307 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0920 16:44:53.854095    8307 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0920 16:44:54.068073    8307 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 16:44:54.068149    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0920 16:44:54.262422    8307 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0920 16:44:54.262496    8307 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0920 16:44:54.317710    8307 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 16:44:54.317773    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0920 16:44:54.340644    8307 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-877987" context rescaled to 1 replicas
	I0920 16:44:54.364176    8307 pod_ready.go:93] pod "coredns-7c65d6cfc9-s4fhl" in "kube-system" namespace has status "Ready":"True"
	I0920 16:44:54.364250    8307 pod_ready.go:82] duration metric: took 510.260504ms for pod "coredns-7c65d6cfc9-s4fhl" in "kube-system" namespace to be "Ready" ...
	I0920 16:44:54.364276    8307 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-v86mg" in "kube-system" namespace to be "Ready" ...
	I0920 16:44:54.370505    8307 pod_ready.go:93] pod "coredns-7c65d6cfc9-v86mg" in "kube-system" namespace has status "Ready":"True"
	I0920 16:44:54.370576    8307 pod_ready.go:82] duration metric: took 6.278163ms for pod "coredns-7c65d6cfc9-v86mg" in "kube-system" namespace to be "Ready" ...
	I0920 16:44:54.370604    8307 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-877987" in "kube-system" namespace to be "Ready" ...
	I0920 16:44:54.376457    8307 pod_ready.go:93] pod "etcd-addons-877987" in "kube-system" namespace has status "Ready":"True"
	I0920 16:44:54.376527    8307 pod_ready.go:82] duration metric: took 5.902408ms for pod "etcd-addons-877987" in "kube-system" namespace to be "Ready" ...
	I0920 16:44:54.376553    8307 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-877987" in "kube-system" namespace to be "Ready" ...
	I0920 16:44:54.411380    8307 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 16:44:54.544026    8307 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0920 16:44:54.544109    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0920 16:44:54.691608    8307 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 16:44:54.842676    8307 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0920 16:44:54.842747    8307 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0920 16:44:55.297198    8307 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0920 16:44:55.297272    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0920 16:44:56.019062    8307 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0920 16:44:56.019092    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0920 16:44:56.382727    8307 pod_ready.go:103] pod "kube-apiserver-addons-877987" in "kube-system" namespace has status "Ready":"False"
	I0920 16:44:56.460998    8307 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 16:44:56.461036    8307 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0920 16:44:57.028794    8307 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 16:44:57.384220    8307 pod_ready.go:93] pod "kube-apiserver-addons-877987" in "kube-system" namespace has status "Ready":"True"
	I0920 16:44:57.384250    8307 pod_ready.go:82] duration metric: took 3.007675895s for pod "kube-apiserver-addons-877987" in "kube-system" namespace to be "Ready" ...
	I0920 16:44:57.384263    8307 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-877987" in "kube-system" namespace to be "Ready" ...
	I0920 16:44:57.393540    8307 pod_ready.go:93] pod "kube-controller-manager-addons-877987" in "kube-system" namespace has status "Ready":"True"
	I0920 16:44:57.393567    8307 pod_ready.go:82] duration metric: took 9.295462ms for pod "kube-controller-manager-addons-877987" in "kube-system" namespace to be "Ready" ...
	I0920 16:44:57.393579    8307 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hxdck" in "kube-system" namespace to be "Ready" ...
	I0920 16:44:57.443776    8307 pod_ready.go:93] pod "kube-proxy-hxdck" in "kube-system" namespace has status "Ready":"True"
	I0920 16:44:57.443803    8307 pod_ready.go:82] duration metric: took 50.216182ms for pod "kube-proxy-hxdck" in "kube-system" namespace to be "Ready" ...
	I0920 16:44:57.443815    8307 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-877987" in "kube-system" namespace to be "Ready" ...
	I0920 16:44:57.843454    8307 pod_ready.go:93] pod "kube-scheduler-addons-877987" in "kube-system" namespace has status "Ready":"True"
	I0920 16:44:57.843499    8307 pod_ready.go:82] duration metric: took 399.67668ms for pod "kube-scheduler-addons-877987" in "kube-system" namespace to be "Ready" ...
	I0920 16:44:57.843509    8307 pod_ready.go:39] duration metric: took 3.999740974s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 16:44:57.843528    8307 api_server.go:52] waiting for apiserver process to appear ...
	I0920 16:44:57.843607    8307 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 16:44:58.557797    8307 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0920 16:44:58.557950    8307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877987
	I0920 16:44:58.586258    8307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-2235/.minikube/machines/addons-877987/id_rsa Username:docker}
	I0920 16:44:59.631435    8307 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0920 16:44:59.856223    8307 addons.go:234] Setting addon gcp-auth=true in "addons-877987"
	I0920 16:44:59.856323    8307 host.go:66] Checking if "addons-877987" exists ...
	I0920 16:44:59.856841    8307 cli_runner.go:164] Run: docker container inspect addons-877987 --format={{.State.Status}}
	I0920 16:44:59.882497    8307 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0920 16:44:59.882559    8307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-877987
	I0920 16:44:59.912994    8307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19672-2235/.minikube/machines/addons-877987/id_rsa Username:docker}
	I0920 16:45:01.778665    8307 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.26722825s)
	I0920 16:45:01.778708    8307 addons.go:475] Verifying addon ingress=true in "addons-877987"
	I0920 16:45:01.778758    8307 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.644961375s)
	I0920 16:45:01.781503    8307 out.go:177] * Verifying ingress addon...
	I0920 16:45:01.784619    8307 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0920 16:45:01.790573    8307 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0920 16:45:01.790599    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:02.332055    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:02.839504    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:03.289301    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:03.493444    8307 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (10.357668781s)
	I0920 16:45:03.493513    8307 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (10.353685028s)
	I0920 16:45:03.493575    8307 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (10.346774548s)
	I0920 16:45:03.493764    8307 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.342889653s)
	I0920 16:45:03.493940    8307 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.249163312s)
	I0920 16:45:03.494005    8307 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (10.241043057s)
	I0920 16:45:03.494081    8307 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (9.968389219s)
	I0920 16:45:03.494095    8307 addons.go:475] Verifying addon metrics-server=true in "addons-877987"
	I0920 16:45:03.494135    8307 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (9.867230416s)
	I0920 16:45:03.494149    8307 addons.go:475] Verifying addon registry=true in "addons-877987"
	I0920 16:45:03.494454    8307 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (9.790001206s)
	I0920 16:45:03.494596    8307 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (9.083133722s)
	W0920 16:45:03.494621    8307 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0920 16:45:03.494650    8307 retry.go:31] will retry after 215.838179ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0920 16:45:03.494718    8307 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (8.803034959s)
	I0920 16:45:03.497613    8307 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-877987 service yakd-dashboard -n yakd-dashboard
	
	I0920 16:45:03.497714    8307 out.go:177] * Verifying registry addon...
	I0920 16:45:03.503497    8307 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0920 16:45:03.520155    8307 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0920 16:45:03.520185    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0920 16:45:03.552565    8307 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0920 16:45:03.711400    8307 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 16:45:03.801438    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:04.010911    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:04.289486    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:04.527349    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:04.765533    8307 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.736675618s)
	I0920 16:45:04.765579    8307 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-877987"
	I0920 16:45:04.765784    8307 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (6.922158431s)
	I0920 16:45:04.765869    8307 api_server.go:72] duration metric: took 13.529739125s to wait for apiserver process to appear ...
	I0920 16:45:04.765891    8307 api_server.go:88] waiting for apiserver healthz status ...
	I0920 16:45:04.765909    8307 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0920 16:45:04.765997    8307 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (4.883474595s)
	I0920 16:45:04.768694    8307 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0920 16:45:04.768768    8307 out.go:177] * Verifying csi-hostpath-driver addon...
	I0920 16:45:04.770990    8307 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 16:45:04.771993    8307 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0920 16:45:04.773870    8307 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0920 16:45:04.773892    8307 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0920 16:45:04.791355    8307 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0920 16:45:04.792998    8307 api_server.go:141] control plane version: v1.31.1
	I0920 16:45:04.794523    8307 api_server.go:131] duration metric: took 28.624631ms to wait for apiserver health ...
	I0920 16:45:04.794574    8307 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 16:45:04.794264    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:04.794486    8307 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0920 16:45:04.794828    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:04.805767    8307 system_pods.go:59] 17 kube-system pods found
	I0920 16:45:04.805860    8307 system_pods.go:61] "coredns-7c65d6cfc9-v86mg" [ea32d161-0a4e-45c3-a5cc-6ae8fd180f7d] Running
	I0920 16:45:04.805887    8307 system_pods.go:61] "csi-hostpath-attacher-0" [1f75974e-07d7-4a96-8e80-0b65f501953f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0920 16:45:04.805926    8307 system_pods.go:61] "csi-hostpath-resizer-0" [19a86aec-4fe1-4f1b-8860-193df89cac24] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0920 16:45:04.805959    8307 system_pods.go:61] "csi-hostpathplugin-zzsqz" [d4342400-11b5-4f45-93db-90e73a576254] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0920 16:45:04.805985    8307 system_pods.go:61] "etcd-addons-877987" [7efeac93-eac9-4e6e-ba04-91346c442ea5] Running
	I0920 16:45:04.806012    8307 system_pods.go:61] "kube-apiserver-addons-877987" [bbd58a20-13fc-4b63-9b17-33ce089ae741] Running
	I0920 16:45:04.806043    8307 system_pods.go:61] "kube-controller-manager-addons-877987" [65055957-39c9-45d0-b5dd-bbac2ff32526] Running
	I0920 16:45:04.806071    8307 system_pods.go:61] "kube-ingress-dns-minikube" [0af9cd0c-aaef-4ff9-98ee-3a5c49360681] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0920 16:45:04.806103    8307 system_pods.go:61] "kube-proxy-hxdck" [9dc379c3-eb77-443b-a7fd-47c094a1b18a] Running
	I0920 16:45:04.806128    8307 system_pods.go:61] "kube-scheduler-addons-877987" [c15278e1-2255-408e-a68a-4e23ef4b7129] Running
	I0920 16:45:04.806181    8307 system_pods.go:61] "metrics-server-84c5f94fbc-gmqh2" [f6899345-ec86-427c-9cdd-46f043d24818] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 16:45:04.806206    8307 system_pods.go:61] "nvidia-device-plugin-daemonset-wrczs" [afc95ef0-9c2a-4b80-a5c8-3df87415fdcc] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0920 16:45:04.806243    8307 system_pods.go:61] "registry-66c9cd494c-lmt9d" [2a3a6aaa-b147-4517-bdc2-529c58ed2d26] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0920 16:45:04.806272    8307 system_pods.go:61] "registry-proxy-2wp2r" [c8ba5e64-c35c-4fdb-8dfb-ede028619b44] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0920 16:45:04.806294    8307 system_pods.go:61] "snapshot-controller-56fcc65765-j8xlq" [61a890d1-ae33-4878-b779-02d606e1fe0d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 16:45:04.806419    8307 system_pods.go:61] "snapshot-controller-56fcc65765-wmcvm" [026cd01c-d15b-4fb6-831d-db609208af92] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 16:45:04.806462    8307 system_pods.go:61] "storage-provisioner" [e3eb2ce0-20de-46a5-8c65-043a2623eb44] Running
	I0920 16:45:04.806485    8307 system_pods.go:74] duration metric: took 11.880539ms to wait for pod list to return data ...
	I0920 16:45:04.806509    8307 default_sa.go:34] waiting for default service account to be created ...
	I0920 16:45:04.809500    8307 default_sa.go:45] found service account: "default"
	I0920 16:45:04.809561    8307 default_sa.go:55] duration metric: took 3.020877ms for default service account to be created ...
	I0920 16:45:04.809584    8307 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 16:45:04.819052    8307 system_pods.go:86] 17 kube-system pods found
	I0920 16:45:04.819131    8307 system_pods.go:89] "coredns-7c65d6cfc9-v86mg" [ea32d161-0a4e-45c3-a5cc-6ae8fd180f7d] Running
	I0920 16:45:04.819158    8307 system_pods.go:89] "csi-hostpath-attacher-0" [1f75974e-07d7-4a96-8e80-0b65f501953f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0920 16:45:04.819185    8307 system_pods.go:89] "csi-hostpath-resizer-0" [19a86aec-4fe1-4f1b-8860-193df89cac24] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0920 16:45:04.819237    8307 system_pods.go:89] "csi-hostpathplugin-zzsqz" [d4342400-11b5-4f45-93db-90e73a576254] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0920 16:45:04.819257    8307 system_pods.go:89] "etcd-addons-877987" [7efeac93-eac9-4e6e-ba04-91346c442ea5] Running
	I0920 16:45:04.819283    8307 system_pods.go:89] "kube-apiserver-addons-877987" [bbd58a20-13fc-4b63-9b17-33ce089ae741] Running
	I0920 16:45:04.819318    8307 system_pods.go:89] "kube-controller-manager-addons-877987" [65055957-39c9-45d0-b5dd-bbac2ff32526] Running
	I0920 16:45:04.819343    8307 system_pods.go:89] "kube-ingress-dns-minikube" [0af9cd0c-aaef-4ff9-98ee-3a5c49360681] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0920 16:45:04.819374    8307 system_pods.go:89] "kube-proxy-hxdck" [9dc379c3-eb77-443b-a7fd-47c094a1b18a] Running
	I0920 16:45:04.819405    8307 system_pods.go:89] "kube-scheduler-addons-877987" [c15278e1-2255-408e-a68a-4e23ef4b7129] Running
	I0920 16:45:04.819431    8307 system_pods.go:89] "metrics-server-84c5f94fbc-gmqh2" [f6899345-ec86-427c-9cdd-46f043d24818] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 16:45:04.819459    8307 system_pods.go:89] "nvidia-device-plugin-daemonset-wrczs" [afc95ef0-9c2a-4b80-a5c8-3df87415fdcc] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0920 16:45:04.819488    8307 system_pods.go:89] "registry-66c9cd494c-lmt9d" [2a3a6aaa-b147-4517-bdc2-529c58ed2d26] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0920 16:45:04.819520    8307 system_pods.go:89] "registry-proxy-2wp2r" [c8ba5e64-c35c-4fdb-8dfb-ede028619b44] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0920 16:45:04.819550    8307 system_pods.go:89] "snapshot-controller-56fcc65765-j8xlq" [61a890d1-ae33-4878-b779-02d606e1fe0d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 16:45:04.819576    8307 system_pods.go:89] "snapshot-controller-56fcc65765-wmcvm" [026cd01c-d15b-4fb6-831d-db609208af92] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 16:45:04.819600    8307 system_pods.go:89] "storage-provisioner" [e3eb2ce0-20de-46a5-8c65-043a2623eb44] Running
	I0920 16:45:04.819636    8307 system_pods.go:126] duration metric: took 10.032735ms to wait for k8s-apps to be running ...
	I0920 16:45:04.819663    8307 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 16:45:04.819760    8307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 16:45:04.913086    8307 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0920 16:45:04.913151    8307 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0920 16:45:05.002422    8307 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 16:45:05.002488    8307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0920 16:45:05.007462    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:05.066265    8307 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 16:45:05.277630    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:05.289572    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:05.507831    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:05.777097    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:05.789276    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:06.010395    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:06.278591    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:06.290324    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:06.293420    8307 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.581970119s)
	I0920 16:45:06.293554    8307 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.473766655s)
	I0920 16:45:06.293608    8307 system_svc.go:56] duration metric: took 1.473942471s WaitForService to wait for kubelet
	I0920 16:45:06.293641    8307 kubeadm.go:582] duration metric: took 15.057502251s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 16:45:06.293683    8307 node_conditions.go:102] verifying NodePressure condition ...
	I0920 16:45:06.299431    8307 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0920 16:45:06.299514    8307 node_conditions.go:123] node cpu capacity is 2
	I0920 16:45:06.299543    8307 node_conditions.go:105] duration metric: took 5.835406ms to run NodePressure ...
	I0920 16:45:06.299584    8307 start.go:241] waiting for startup goroutines ...
	I0920 16:45:06.481505    8307 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.41510123s)
	I0920 16:45:06.485045    8307 addons.go:475] Verifying addon gcp-auth=true in "addons-877987"
	I0920 16:45:06.488290    8307 out.go:177] * Verifying gcp-auth addon...
	I0920 16:45:06.491231    8307 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0920 16:45:06.498106    8307 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0920 16:45:06.601339    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:06.780453    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:06.790215    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:07.006880    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:07.277277    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:07.289553    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:07.507465    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:07.777270    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:07.793294    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:08.007647    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:08.277785    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:08.289211    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:08.507197    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:08.777840    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:08.788895    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:09.007315    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:09.277863    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:09.289078    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:09.507647    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:09.777095    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:09.789998    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:10.008349    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:10.277878    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:10.289485    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:10.507917    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:10.779792    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:10.789271    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:11.007052    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:11.277148    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:11.289531    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:11.507110    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:11.777872    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:11.789504    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:12.010460    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:12.278442    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:12.289210    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:12.507581    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:12.777180    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:12.789098    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:13.007908    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:13.281452    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:13.289398    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:13.596340    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:13.777637    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:13.789877    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:14.008153    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:14.278793    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:14.291032    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:14.507821    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:14.777983    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:14.789287    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:15.007567    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:15.276512    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:15.289160    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:15.507332    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:15.778543    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:15.790003    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:16.008521    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:16.277252    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:16.289379    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:16.508031    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:16.777732    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:16.789477    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:17.007499    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:17.278640    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:17.288617    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:17.507681    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:17.777254    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:17.788753    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:18.007320    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:18.277562    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:18.290109    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:18.507480    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:18.778264    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:18.789919    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:19.007129    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:19.276661    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:19.289287    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:19.508894    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:19.778764    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:19.789515    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:20.007413    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:20.298139    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:20.299443    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:20.510175    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:20.777777    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:20.789356    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:21.007223    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:21.278806    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:21.289408    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:21.508736    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:21.777465    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:21.789672    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:22.007657    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:22.277135    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:22.288691    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:22.508210    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:22.776973    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:22.789992    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:23.007746    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:23.277499    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:23.289916    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:23.507338    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:23.777885    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:23.789876    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:24.007355    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:24.277154    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:24.289457    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:24.506989    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:24.777128    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:24.789167    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:25.007517    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:25.278364    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:25.290142    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:25.507788    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:25.777676    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:25.789289    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:26.007822    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:26.276932    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:26.290162    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:26.507659    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:26.777466    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:26.790457    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:27.007708    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:27.278403    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:27.289843    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:27.507163    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:27.777578    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:27.793746    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:28.007117    8307 kapi.go:107] duration metric: took 24.503616984s to wait for kubernetes.io/minikube-addons=registry ...
	I0920 16:45:28.276787    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:28.288460    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:28.777894    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:28.789981    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:29.276984    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:29.288924    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:29.777724    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:29.790224    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:30.277451    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:30.290842    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:30.776846    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:30.789167    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:31.276623    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:31.289346    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:31.777897    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:31.789175    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:32.276799    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:32.288875    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:32.778423    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:32.791172    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:33.278641    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:33.297129    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:33.785400    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:33.788996    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:34.279663    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:34.289832    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:34.780206    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:34.789532    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:35.280833    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:35.293528    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:35.778059    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:35.789397    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:36.276984    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:36.289229    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:36.777161    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:36.789074    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:37.277003    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:37.292096    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:37.777174    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:37.878379    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:38.277256    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:38.288901    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:38.776973    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:38.789186    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:39.277601    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:39.289159    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:39.778730    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:39.789424    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:40.277671    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:40.290446    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:40.777758    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:40.789130    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:41.277674    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:41.291854    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:41.777693    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:41.792206    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:42.293586    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:42.297483    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:42.776524    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:42.791014    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:43.277141    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:43.289062    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:43.777724    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:43.789653    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:44.277317    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:44.289609    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:44.776581    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:44.789899    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:45.314028    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:45.315680    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:45.777343    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:45.789262    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:46.279196    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:46.290912    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:46.776921    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:46.789166    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:47.277544    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:47.289290    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:47.798172    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:47.799100    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:48.277466    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:48.289682    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:48.779018    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:48.789756    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:49.276971    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:49.292509    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:49.777680    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:49.788670    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:50.277035    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:50.289332    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:50.777597    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:50.789985    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:51.277943    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:51.290517    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:51.778953    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:51.788926    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:52.279316    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:52.291471    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:52.777729    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:52.789560    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:53.277245    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:53.289305    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:53.776628    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:53.788625    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:54.277144    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:54.289494    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:54.777125    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:54.789452    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:55.277360    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:55.289497    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:55.777674    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:55.789740    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:56.277296    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:56.289187    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:56.776043    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:56.789276    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:57.276224    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:57.289379    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:57.776911    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:57.788647    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:58.276768    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:58.288816    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:58.777078    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:58.788954    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:59.280992    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:59.288903    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:45:59.777737    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:59.789582    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:00.288617    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:46:00.327321    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:00.777478    8307 kapi.go:107] duration metric: took 56.005480505s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0920 16:46:00.790429    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:01.289726    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:01.788779    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:02.290224    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:02.789469    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:03.289287    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:03.789681    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:04.289492    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:04.789664    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:05.290394    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:05.789853    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:06.289152    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:06.789934    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:07.289759    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:07.789801    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:08.291132    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:08.789547    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:09.289169    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:09.790603    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:10.289859    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:10.789524    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:11.289727    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:11.789583    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:12.290722    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:12.800053    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:13.292159    8307 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 16:46:13.795164    8307 kapi.go:107] duration metric: took 1m12.010544183s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0920 16:46:28.521701    8307 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0920 16:46:28.521730    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:28.995622    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:29.495711    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:29.995194    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:30.495567    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:30.994516    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:31.494446    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:31.994999    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:32.495588    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:32.995271    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:33.494729    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:33.995243    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:34.496260    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:34.995218    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:35.494477    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:35.995316    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:36.495115    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:36.994832    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:37.495291    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:37.995028    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:38.494895    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:38.994368    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:39.495699    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:39.994753    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:40.494802    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:40.994721    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:41.494599    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:41.995283    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:42.495423    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:42.994641    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:43.495805    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:43.994426    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:44.495685    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:44.994965    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:45.494496    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:45.995439    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:46.494491    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:46.995590    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:47.494634    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:47.995058    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:48.494831    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:48.995547    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:49.495746    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:49.994994    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:50.494611    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:50.994892    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:51.494373    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:51.995399    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:52.495612    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:52.995912    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:53.495248    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:53.995302    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:54.495786    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:54.994960    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:55.495109    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:55.994763    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:56.495845    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:56.994276    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:57.494716    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:57.995610    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:58.495383    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:58.994443    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:59.495497    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:59.995490    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:00.496184    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:00.994511    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:01.494634    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:01.994436    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:02.495505    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:02.994991    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:03.494658    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:03.995479    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:04.495394    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:04.996029    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:05.494952    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:05.994243    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:06.498296    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:06.994913    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:07.494286    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:07.994963    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:08.494943    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:08.995120    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:09.501265    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:09.995651    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:10.495830    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:10.994761    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:11.494600    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:11.996858    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:12.494610    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:12.994983    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:13.494874    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:13.994474    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:14.521495    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:14.996596    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:15.495270    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:15.994943    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:16.495495    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:16.995512    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:17.495131    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:17.994654    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:18.495210    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:18.994431    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:19.494853    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:19.994705    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:20.495397    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:20.995007    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:21.494273    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:21.995551    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:22.495639    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:22.995117    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:23.494968    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:23.994412    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:24.495400    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:24.996076    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:25.495160    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:25.995332    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:26.495542    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:26.994881    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:27.494869    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:27.995157    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:28.495544    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:28.995281    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:29.495613    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:29.997746    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:30.495592    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:30.995359    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:31.494863    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:31.994213    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:32.495389    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:32.995069    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:33.494760    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:33.994472    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:34.494923    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:34.996013    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:35.494603    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:35.995760    8307 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:47:36.494742    8307 kapi.go:107] duration metric: took 2m30.003509007s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0920 16:47:36.496805    8307 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-877987 cluster.
	I0920 16:47:36.499379    8307 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0920 16:47:36.501607    8307 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0920 16:47:36.503581    8307 out.go:177] * Enabled addons: nvidia-device-plugin, volcano, cloud-spanner, storage-provisioner, ingress-dns, metrics-server, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0920 16:47:36.505559    8307 addons.go:510] duration metric: took 2m45.269176298s for enable addons: enabled=[nvidia-device-plugin volcano cloud-spanner storage-provisioner ingress-dns metrics-server inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0920 16:47:36.505610    8307 start.go:246] waiting for cluster config update ...
	I0920 16:47:36.505633    8307 start.go:255] writing updated cluster config ...
	I0920 16:47:36.505927    8307 ssh_runner.go:195] Run: rm -f paused
	I0920 16:47:36.839434    8307 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 16:47:36.842034    8307 out.go:177] * Done! kubectl is now configured to use "addons-877987" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 20 16:57:12 addons-877987 dockerd[1288]: time="2024-09-20T16:57:12.111121595Z" level=info msg="ignoring event" container=2d4f3ecd371d587526f3619e531f1a32114deadae6034f63bb345c7dd6517b2a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 16:57:12 addons-877987 dockerd[1288]: time="2024-09-20T16:57:12.149801357Z" level=info msg="ignoring event" container=86e93463680c00e19a959848d9e7bec0d19f26e8c2d3372dd1bf81b7792d16d4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 16:57:12 addons-877987 dockerd[1288]: time="2024-09-20T16:57:12.155694548Z" level=info msg="ignoring event" container=2c4474d96214a5b9812925e3c930cb6d2f1b1f19ed2042b5ad945fa9b0dfd3da module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 16:57:12 addons-877987 dockerd[1288]: time="2024-09-20T16:57:12.169114601Z" level=info msg="ignoring event" container=ff5151dfd70f983e9d2d4d5115c33368154fb3649ab974856b5887a1148c710d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 16:57:12 addons-877987 dockerd[1288]: time="2024-09-20T16:57:12.193442768Z" level=info msg="ignoring event" container=1f525cbc21478b035c7f450f9e1ce317d844ca65635ac2ba67a1f327350fa30e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 16:57:12 addons-877987 dockerd[1288]: time="2024-09-20T16:57:12.199913987Z" level=info msg="ignoring event" container=9aed4f1cb2be20360843bae6227424dbce9c5c631a431ffae5e0396459e5620f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 16:57:12 addons-877987 dockerd[1288]: time="2024-09-20T16:57:12.199971472Z" level=info msg="ignoring event" container=1d98b1744270b7bfe8de1baea9efcf98c0ce9c4481fff1502c2f9c022f68aa10 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 16:57:12 addons-877987 dockerd[1288]: time="2024-09-20T16:57:12.207970353Z" level=info msg="ignoring event" container=d18b4e831ba0fc70f1b90c33002bba9688b79dda2c02cad9b4e85a2ba461d33d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 16:57:12 addons-877987 dockerd[1288]: time="2024-09-20T16:57:12.452107911Z" level=info msg="ignoring event" container=9121123dcdf4f35ea5aa3e6dcc2530fc3f28f1d371793a55aacfbc7247f24d82 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 16:57:12 addons-877987 dockerd[1288]: time="2024-09-20T16:57:12.505675704Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=f0000207d445f43f traceID=bb3707071cea39e7303770ec4522f193
	Sep 20 16:57:12 addons-877987 dockerd[1288]: time="2024-09-20T16:57:12.510694247Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=f0000207d445f43f traceID=bb3707071cea39e7303770ec4522f193
	Sep 20 16:57:12 addons-877987 dockerd[1288]: time="2024-09-20T16:57:12.545389342Z" level=info msg="ignoring event" container=ac933261728a1569c1c0106e720235063700481d628289cceb7f66106bff06e3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 16:57:12 addons-877987 dockerd[1288]: time="2024-09-20T16:57:12.571836771Z" level=info msg="ignoring event" container=1bacd7f5bd1752ce9f56dc6910476d9a80c4a0dc4fb9c9ce551b1e5029e4bc90 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 16:57:18 addons-877987 dockerd[1288]: time="2024-09-20T16:57:18.671478917Z" level=info msg="ignoring event" container=9d613eb796bbcb7b61b59874825356ef24b431b9a5b945fef2cec6fcc7b18509 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 16:57:18 addons-877987 dockerd[1288]: time="2024-09-20T16:57:18.712821162Z" level=info msg="ignoring event" container=f73972937ebe67bdeefc2f21b28deb3b0f465bfa541b914dc71dadbbce29b802 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 16:57:18 addons-877987 dockerd[1288]: time="2024-09-20T16:57:18.873393785Z" level=info msg="ignoring event" container=69f1d9f0d4382cb7e93b54914b09b269a1a99b2515602ac20481ff96a082117d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 16:57:18 addons-877987 dockerd[1288]: time="2024-09-20T16:57:18.902734968Z" level=info msg="ignoring event" container=697285cfc8923bd46e879757680ec468321a1a78978e68fac4ed0f60647f7ef6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 16:57:26 addons-877987 dockerd[1288]: time="2024-09-20T16:57:26.419758791Z" level=info msg="ignoring event" container=e5a533500f4b94d142b194c09f37a7e7ee5eddf5c8a0a5a63f52346b55d1f87c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 16:57:26 addons-877987 dockerd[1288]: time="2024-09-20T16:57:26.534881761Z" level=info msg="ignoring event" container=c6032b475699b01866b1b92c0231890d6c545a30cfbe8e2ba2c78abcdf7c07f3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 16:57:32 addons-877987 dockerd[1288]: time="2024-09-20T16:57:32.079582250Z" level=info msg="ignoring event" container=eb665ae20be3901fd4b10b49632c023ce24dd63f7d5beaddb5024f38b08e3b84 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 16:57:32 addons-877987 dockerd[1288]: time="2024-09-20T16:57:32.841567608Z" level=info msg="ignoring event" container=ed7fe1013f05d10aa9e47b9c65f635615a00278ddb6fdfa102d21bfcdf106e16 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 16:57:33 addons-877987 dockerd[1288]: time="2024-09-20T16:57:33.534250589Z" level=info msg="ignoring event" container=cd436dd64d8c0b14ac2aa4cd8cf47938781b4b3ee5399864fa282c6d660dbb9d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 16:57:33 addons-877987 dockerd[1288]: time="2024-09-20T16:57:33.634116011Z" level=info msg="ignoring event" container=7d34777f741bc88a0329ff2c8195d06bc1021dfd80fc5906b368c3165581b254 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 16:57:33 addons-877987 dockerd[1288]: time="2024-09-20T16:57:33.767967380Z" level=info msg="ignoring event" container=e52d0c93971f9d9663084ad2d71aaa4296eb50cb683d75e854e5571d0d34477e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 16:57:33 addons-877987 dockerd[1288]: time="2024-09-20T16:57:33.888660280Z" level=info msg="ignoring event" container=8e196733bf750c0ed6fc65450d08f3df59435b210f4362801ab955c9498a1b2d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                       ATTEMPT             POD ID              POD
	3f2756d32e2a1       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 9 minutes ago       Running             gcp-auth                   0                   2dcb03eed51c4       gcp-auth-89d5ffd79-w7ggl
	b96a36f62143c       registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce             11 minutes ago      Running             controller                 0                   5266c9fb66cfe       ingress-nginx-controller-bc57996ff-vh9tp
	fdecd440fe26f       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              patch                      0                   77894f5e64ded       ingress-nginx-admission-patch-wrlgh
	063206699a14b       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              create                     0                   47581dbbca3a9       ingress-nginx-admission-create-bgvbz
	9bf3b4102ceb7       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                        12 minutes ago      Running             yakd                       0                   002af133052de       yakd-dashboard-67d98fc6b-k78vc
	efc1160507a15       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                       12 minutes ago      Running             local-path-provisioner     0                   2a9c76b446190       local-path-provisioner-86d989889c-4cqvt
	9df7b2d11d52d       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c             12 minutes ago      Running             minikube-ingress-dns       0                   0bef5d5859fbc       kube-ingress-dns-minikube
	ec5f517850414       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc               12 minutes ago      Running             cloud-spanner-emulator     0                   d961f1d35750d       cloud-spanner-emulator-769b77f747-9sj7c
	5ca8750f219f6       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                     12 minutes ago      Running             nvidia-device-plugin-ctr   0                   09e31ac874e26       nvidia-device-plugin-daemonset-wrczs
	30d1039837d31       ba04bb24b9575                                                                                                                12 minutes ago      Running             storage-provisioner        0                   adb7188809498       storage-provisioner
	4d29b087056f2       2f6c962e7b831                                                                                                                12 minutes ago      Running             coredns                    0                   eb937a1327c9b       coredns-7c65d6cfc9-v86mg
	b2517fcb15811       24a140c548c07                                                                                                                12 minutes ago      Running             kube-proxy                 0                   1066775f0776f       kube-proxy-hxdck
	c74d0c39b85d1       d3f53a98c0a9d                                                                                                                12 minutes ago      Running             kube-apiserver             0                   8620ad9d9d533       kube-apiserver-addons-877987
	8e3c2eb108002       27e3830e14027                                                                                                                12 minutes ago      Running             etcd                       0                   d0b09bedccefa       etcd-addons-877987
	36348c169e3a8       279f381cb3736                                                                                                                12 minutes ago      Running             kube-controller-manager    0                   6c4dad70501bb       kube-controller-manager-addons-877987
	f3d7f7ae712af       7f8aa378bb47d                                                                                                                12 minutes ago      Running             kube-scheduler             0                   1f68acf3a5676       kube-scheduler-addons-877987
	
	
	==> controller_ingress [b96a36f62143] <==
	NGINX Ingress controller
	  Release:       v1.11.2
	  Build:         46e76e5916813cfca2a9b0bfdc34b69a0000f6b9
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.25.5
	
	-------------------------------------------------------------------------------
	
	I0920 16:46:12.789989       7 main.go:248] "Running in Kubernetes cluster" major="1" minor="31" git="v1.31.1" state="clean" commit="948afe5ca072329a73c8e79ed5938717a5cb3d21" platform="linux/arm64"
	I0920 16:46:13.168078       7 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0920 16:46:13.188429       7 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0920 16:46:13.199743       7 nginx.go:271] "Starting NGINX Ingress controller"
	I0920 16:46:13.221833       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"6de0e728-5a46-4b37-85fb-df1ed2391769", APIVersion:"v1", ResourceVersion:"703", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0920 16:46:13.230116       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"46d6e11b-1b13-4272-b46e-9f47a3d8cdac", APIVersion:"v1", ResourceVersion:"706", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0920 16:46:13.230433       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"6ae4715f-6667-47b4-9cae-a714cb9916d4", APIVersion:"v1", ResourceVersion:"707", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0920 16:46:14.402556       7 nginx.go:317] "Starting NGINX process"
	I0920 16:46:14.402832       7 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0920 16:46:14.403007       7 nginx.go:337] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0920 16:46:14.403367       7 controller.go:193] "Configuration changes detected, backend reload required"
	I0920 16:46:14.413397       7 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0920 16:46:14.414075       7 status.go:85] "New leader elected" identity="ingress-nginx-controller-bc57996ff-vh9tp"
	I0920 16:46:14.427134       7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-bc57996ff-vh9tp" node="addons-877987"
	I0920 16:46:14.447737       7 controller.go:213] "Backend successfully reloaded"
	I0920 16:46:14.447809       7 controller.go:224] "Initial sync, sleeping for 1 second"
	I0920 16:46:14.447956       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-vh9tp", UID:"5b31605c-ae08-4755-8868-bd7183ac9d43", APIVersion:"v1", ResourceVersion:"735", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	
	
	==> coredns [4d29b087056f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	[INFO] Reloading complete
	[INFO] 127.0.0.1:33396 - 15335 "HINFO IN 7048061253374173460.3195082509755045569. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.038209005s
	[INFO] 10.244.0.25:54965 - 32540 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000292119s
	[INFO] 10.244.0.25:37161 - 43504 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00012683s
	[INFO] 10.244.0.25:58619 - 37862 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000125706s
	[INFO] 10.244.0.25:45279 - 17498 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000102542s
	[INFO] 10.244.0.25:40301 - 2204 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000137226s
	[INFO] 10.244.0.25:57578 - 61190 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000105717s
	[INFO] 10.244.0.25:59894 - 54813 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002208615s
	[INFO] 10.244.0.25:48563 - 46260 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002602159s
	[INFO] 10.244.0.25:35868 - 48728 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00297565s
	[INFO] 10.244.0.25:45646 - 49671 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.003086011s
	
	
	==> describe nodes <==
	Name:               addons-877987
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-877987
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1
	                    minikube.k8s.io/name=addons-877987
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T16_44_46_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-877987
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 16:44:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-877987
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 16:57:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 16:56:51 +0000   Fri, 20 Sep 2024 16:44:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 16:56:51 +0000   Fri, 20 Sep 2024 16:44:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 16:56:51 +0000   Fri, 20 Sep 2024 16:44:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 16:56:51 +0000   Fri, 20 Sep 2024 16:44:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-877987
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 aa8de342cbf64be7abfbf8f5433a0ce7
	  System UUID:                12ec6b84-be42-4491-bd8d-cc388ab37e23
	  Boot ID:                    cfeac633-1b4b-4878-a7d1-bdd76da68a0f
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.3.0
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m16s
	  default                     cloud-spanner-emulator-769b77f747-9sj7c     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  gcp-auth                    gcp-auth-89d5ffd79-w7ggl                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-vh9tp    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         12m
	  kube-system                 coredns-7c65d6cfc9-v86mg                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-addons-877987                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-877987                250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-877987       200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-hxdck                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-877987                100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 nvidia-device-plugin-daemonset-wrczs        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          local-path-provisioner-86d989889c-4cqvt     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-k78vc              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             388Mi (4%)  426Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 12m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node addons-877987 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x7 over 12m)  kubelet          Node addons-877987 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node addons-877987 status is now: NodeHasSufficientPID
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node addons-877987 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node addons-877987 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                kubelet          Node addons-877987 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                node-controller  Node addons-877987 event: Registered Node addons-877987 in Controller
	
	
	==> dmesg <==
	[Sep20 16:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014742] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.507055] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.803986] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.089572] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> etcd [8e3c2eb10800] <==
	{"level":"info","ts":"2024-09-20T16:44:39.222385Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-20T16:44:39.217615Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-20T16:44:39.274390Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-20T16:44:39.274436Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-20T16:44:39.274460Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-09-20T16:44:39.274481Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-09-20T16:44:39.274487Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-20T16:44:39.274498Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-20T16:44:39.274505Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-20T16:44:39.281355Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T16:44:39.286547Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-877987 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-20T16:44:39.286577Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T16:44:39.294411Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T16:44:39.294511Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T16:44:39.294535Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T16:44:39.294419Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T16:44:39.295061Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T16:44:39.295233Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T16:44:39.295997Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-20T16:44:39.296133Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-20T16:44:39.302440Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-20T16:44:39.302472Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-20T16:54:40.959788Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1879}
	{"level":"info","ts":"2024-09-20T16:54:41.008627Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1879,"took":"47.761985ms","hash":1115996737,"current-db-size-bytes":8843264,"current-db-size":"8.8 MB","current-db-size-in-use-bytes":4960256,"current-db-size-in-use":"5.0 MB"}
	{"level":"info","ts":"2024-09-20T16:54:41.008684Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1115996737,"revision":1879,"compact-revision":-1}
	
	
	==> gcp-auth [3f2756d32e2a] <==
	2024/09/20 16:47:36 GCP Auth Webhook started!
	2024/09/20 16:47:54 Ready to marshal response ...
	2024/09/20 16:47:54 Ready to write response ...
	2024/09/20 16:47:54 Ready to marshal response ...
	2024/09/20 16:47:54 Ready to write response ...
	2024/09/20 16:48:18 Ready to marshal response ...
	2024/09/20 16:48:18 Ready to write response ...
	2024/09/20 16:48:18 Ready to marshal response ...
	2024/09/20 16:48:18 Ready to write response ...
	2024/09/20 16:48:18 Ready to marshal response ...
	2024/09/20 16:48:18 Ready to write response ...
	2024/09/20 16:56:22 Ready to marshal response ...
	2024/09/20 16:56:22 Ready to write response ...
	2024/09/20 16:56:22 Ready to marshal response ...
	2024/09/20 16:56:22 Ready to write response ...
	2024/09/20 16:56:22 Ready to marshal response ...
	2024/09/20 16:56:22 Ready to write response ...
	2024/09/20 16:56:32 Ready to marshal response ...
	2024/09/20 16:56:32 Ready to write response ...
	2024/09/20 16:56:41 Ready to marshal response ...
	2024/09/20 16:56:41 Ready to write response ...
	2024/09/20 16:57:02 Ready to marshal response ...
	2024/09/20 16:57:02 Ready to write response ...
	
	
	==> kernel <==
	 16:57:35 up 40 min,  0 users,  load average: 0.33, 0.49, 0.47
	Linux addons-877987 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [c74d0c39b85d] <==
	I0920 16:48:09.488053       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0920 16:48:09.850804       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0920 16:48:10.033100       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0920 16:48:10.132268       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0920 16:48:10.198461       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0920 16:48:10.488313       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0920 16:48:10.736565       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0920 16:56:22.219183       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.109.65.184"}
	I0920 16:56:48.574728       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0920 16:56:50.475196       1 watch.go:250] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I0920 16:57:18.386166       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 16:57:18.386219       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 16:57:18.405670       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 16:57:18.405960       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 16:57:18.455161       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 16:57:18.455219       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 16:57:18.465024       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 16:57:18.465877       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 16:57:18.537288       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 16:57:18.537366       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0920 16:57:19.465709       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0920 16:57:19.544320       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0920 16:57:19.562657       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	I0920 16:57:31.993584       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0920 16:57:33.020308       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	
	
	==> kube-controller-manager [36348c169e3a] <==
	I0920 16:57:20.965778       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0920 16:57:20.965830       1 shared_informer.go:320] Caches are synced for garbage collector
	W0920 16:57:21.051379       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 16:57:21.051438       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 16:57:21.060453       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 16:57:21.060501       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 16:57:22.891335       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 16:57:22.891377       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 16:57:23.656604       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 16:57:23.656647       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 16:57:23.870896       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 16:57:23.870942       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0920 16:57:25.317079       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="4.701µs"
	W0920 16:57:25.528042       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 16:57:25.528086       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 16:57:27.340374       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 16:57:27.340421       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 16:57:27.377899       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 16:57:27.377942       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 16:57:29.279972       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 16:57:29.280031       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E0920 16:57:33.022072       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0920 16:57:33.456567       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="4.184µs"
	W0920 16:57:34.140435       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 16:57:34.140481       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [b2517fcb1581] <==
	I0920 16:44:52.297204       1 server_linux.go:66] "Using iptables proxy"
	I0920 16:44:52.405215       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0920 16:44:52.405274       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 16:44:52.463833       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0920 16:44:52.464015       1 server_linux.go:169] "Using iptables Proxier"
	I0920 16:44:52.466243       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 16:44:52.466637       1 server.go:483] "Version info" version="v1.31.1"
	I0920 16:44:52.466653       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 16:44:52.486341       1 config.go:199] "Starting service config controller"
	I0920 16:44:52.486373       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 16:44:52.486399       1 config.go:105] "Starting endpoint slice config controller"
	I0920 16:44:52.486403       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 16:44:52.487920       1 config.go:328] "Starting node config controller"
	I0920 16:44:52.487933       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 16:44:52.587459       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 16:44:52.587515       1 shared_informer.go:320] Caches are synced for service config
	I0920 16:44:52.588316       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [f3d7f7ae712a] <==
	W0920 16:44:43.893201       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0920 16:44:43.893246       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 16:44:43.894208       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0920 16:44:43.894245       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 16:44:43.894530       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0920 16:44:43.894558       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0920 16:44:43.894635       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0920 16:44:43.894651       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 16:44:43.894722       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0920 16:44:43.894748       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 16:44:43.894810       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0920 16:44:43.894838       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 16:44:43.894909       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0920 16:44:43.894930       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 16:44:43.895005       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0920 16:44:43.895024       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 16:44:43.895101       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0920 16:44:43.895119       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 16:44:43.895183       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0920 16:44:43.895198       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 16:44:43.895407       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0920 16:44:43.895430       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 16:44:44.763941       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0920 16:44:44.763992       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0920 16:44:46.382684       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 20 16:57:32 addons-877987 kubelet[2348]: I0920 16:57:32.423317    2348 reconciler_common.go:288] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0e0e8336-c68e-4cca-9118-71a3bca51144-host\") on node \"addons-877987\" DevicePath \"\""
	Sep 20 16:57:32 addons-877987 kubelet[2348]: I0920 16:57:32.423328    2348 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-sjpgv\" (UniqueName: \"kubernetes.io/projected/0e0e8336-c68e-4cca-9118-71a3bca51144-kube-api-access-sjpgv\") on node \"addons-877987\" DevicePath \"\""
	Sep 20 16:57:32 addons-877987 kubelet[2348]: I0920 16:57:32.423337    2348 reconciler_common.go:288] "Volume detached for volume \"debugfs\" (UniqueName: \"kubernetes.io/host-path/0e0e8336-c68e-4cca-9118-71a3bca51144-debugfs\") on node \"addons-877987\" DevicePath \"\""
	Sep 20 16:57:32 addons-877987 kubelet[2348]: I0920 16:57:32.935540    2348 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rwxg\" (UniqueName: \"kubernetes.io/projected/b3be4a97-bce2-4ea1-b338-c56f1b373bfc-kube-api-access-9rwxg\") pod \"b3be4a97-bce2-4ea1-b338-c56f1b373bfc\" (UID: \"b3be4a97-bce2-4ea1-b338-c56f1b373bfc\") "
	Sep 20 16:57:32 addons-877987 kubelet[2348]: I0920 16:57:32.936048    2348 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/b3be4a97-bce2-4ea1-b338-c56f1b373bfc-gcp-creds\") pod \"b3be4a97-bce2-4ea1-b338-c56f1b373bfc\" (UID: \"b3be4a97-bce2-4ea1-b338-c56f1b373bfc\") "
	Sep 20 16:57:32 addons-877987 kubelet[2348]: I0920 16:57:32.936295    2348 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3be4a97-bce2-4ea1-b338-c56f1b373bfc-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "b3be4a97-bce2-4ea1-b338-c56f1b373bfc" (UID: "b3be4a97-bce2-4ea1-b338-c56f1b373bfc"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 20 16:57:32 addons-877987 kubelet[2348]: I0920 16:57:32.938232    2348 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3be4a97-bce2-4ea1-b338-c56f1b373bfc-kube-api-access-9rwxg" (OuterVolumeSpecName: "kube-api-access-9rwxg") pod "b3be4a97-bce2-4ea1-b338-c56f1b373bfc" (UID: "b3be4a97-bce2-4ea1-b338-c56f1b373bfc"). InnerVolumeSpecName "kube-api-access-9rwxg". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 16:57:33 addons-877987 kubelet[2348]: I0920 16:57:33.036891    2348 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-9rwxg\" (UniqueName: \"kubernetes.io/projected/b3be4a97-bce2-4ea1-b338-c56f1b373bfc-kube-api-access-9rwxg\") on node \"addons-877987\" DevicePath \"\""
	Sep 20 16:57:33 addons-877987 kubelet[2348]: I0920 16:57:33.036935    2348 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/b3be4a97-bce2-4ea1-b338-c56f1b373bfc-gcp-creds\") on node \"addons-877987\" DevicePath \"\""
	Sep 20 16:57:33 addons-877987 kubelet[2348]: I0920 16:57:33.946716    2348 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txtk8\" (UniqueName: \"kubernetes.io/projected/2a3a6aaa-b147-4517-bdc2-529c58ed2d26-kube-api-access-txtk8\") pod \"2a3a6aaa-b147-4517-bdc2-529c58ed2d26\" (UID: \"2a3a6aaa-b147-4517-bdc2-529c58ed2d26\") "
	Sep 20 16:57:33 addons-877987 kubelet[2348]: I0920 16:57:33.952378    2348 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a3a6aaa-b147-4517-bdc2-529c58ed2d26-kube-api-access-txtk8" (OuterVolumeSpecName: "kube-api-access-txtk8") pod "2a3a6aaa-b147-4517-bdc2-529c58ed2d26" (UID: "2a3a6aaa-b147-4517-bdc2-529c58ed2d26"). InnerVolumeSpecName "kube-api-access-txtk8". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 16:57:34 addons-877987 kubelet[2348]: I0920 16:57:34.048775    2348 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fpp2\" (UniqueName: \"kubernetes.io/projected/c8ba5e64-c35c-4fdb-8dfb-ede028619b44-kube-api-access-5fpp2\") pod \"c8ba5e64-c35c-4fdb-8dfb-ede028619b44\" (UID: \"c8ba5e64-c35c-4fdb-8dfb-ede028619b44\") "
	Sep 20 16:57:34 addons-877987 kubelet[2348]: I0920 16:57:34.048912    2348 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-txtk8\" (UniqueName: \"kubernetes.io/projected/2a3a6aaa-b147-4517-bdc2-529c58ed2d26-kube-api-access-txtk8\") on node \"addons-877987\" DevicePath \"\""
	Sep 20 16:57:34 addons-877987 kubelet[2348]: I0920 16:57:34.059497    2348 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8ba5e64-c35c-4fdb-8dfb-ede028619b44-kube-api-access-5fpp2" (OuterVolumeSpecName: "kube-api-access-5fpp2") pod "c8ba5e64-c35c-4fdb-8dfb-ede028619b44" (UID: "c8ba5e64-c35c-4fdb-8dfb-ede028619b44"). InnerVolumeSpecName "kube-api-access-5fpp2". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 16:57:34 addons-877987 kubelet[2348]: I0920 16:57:34.149736    2348 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-5fpp2\" (UniqueName: \"kubernetes.io/projected/c8ba5e64-c35c-4fdb-8dfb-ede028619b44-kube-api-access-5fpp2\") on node \"addons-877987\" DevicePath \"\""
	Sep 20 16:57:34 addons-877987 kubelet[2348]: I0920 16:57:34.243153    2348 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e0e8336-c68e-4cca-9118-71a3bca51144" path="/var/lib/kubelet/pods/0e0e8336-c68e-4cca-9118-71a3bca51144/volumes"
	Sep 20 16:57:34 addons-877987 kubelet[2348]: I0920 16:57:34.243620    2348 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3be4a97-bce2-4ea1-b338-c56f1b373bfc" path="/var/lib/kubelet/pods/b3be4a97-bce2-4ea1-b338-c56f1b373bfc/volumes"
	Sep 20 16:57:34 addons-877987 kubelet[2348]: I0920 16:57:34.379627    2348 scope.go:117] "RemoveContainer" containerID="cd436dd64d8c0b14ac2aa4cd8cf47938781b4b3ee5399864fa282c6d660dbb9d"
	Sep 20 16:57:34 addons-877987 kubelet[2348]: I0920 16:57:34.432611    2348 scope.go:117] "RemoveContainer" containerID="cd436dd64d8c0b14ac2aa4cd8cf47938781b4b3ee5399864fa282c6d660dbb9d"
	Sep 20 16:57:34 addons-877987 kubelet[2348]: E0920 16:57:34.433548    2348 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: cd436dd64d8c0b14ac2aa4cd8cf47938781b4b3ee5399864fa282c6d660dbb9d" containerID="cd436dd64d8c0b14ac2aa4cd8cf47938781b4b3ee5399864fa282c6d660dbb9d"
	Sep 20 16:57:34 addons-877987 kubelet[2348]: I0920 16:57:34.433586    2348 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"cd436dd64d8c0b14ac2aa4cd8cf47938781b4b3ee5399864fa282c6d660dbb9d"} err="failed to get container status \"cd436dd64d8c0b14ac2aa4cd8cf47938781b4b3ee5399864fa282c6d660dbb9d\": rpc error: code = Unknown desc = Error response from daemon: No such container: cd436dd64d8c0b14ac2aa4cd8cf47938781b4b3ee5399864fa282c6d660dbb9d"
	Sep 20 16:57:34 addons-877987 kubelet[2348]: I0920 16:57:34.433613    2348 scope.go:117] "RemoveContainer" containerID="7d34777f741bc88a0329ff2c8195d06bc1021dfd80fc5906b368c3165581b254"
	Sep 20 16:57:34 addons-877987 kubelet[2348]: I0920 16:57:34.450228    2348 scope.go:117] "RemoveContainer" containerID="7d34777f741bc88a0329ff2c8195d06bc1021dfd80fc5906b368c3165581b254"
	Sep 20 16:57:34 addons-877987 kubelet[2348]: E0920 16:57:34.451252    2348 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 7d34777f741bc88a0329ff2c8195d06bc1021dfd80fc5906b368c3165581b254" containerID="7d34777f741bc88a0329ff2c8195d06bc1021dfd80fc5906b368c3165581b254"
	Sep 20 16:57:34 addons-877987 kubelet[2348]: I0920 16:57:34.451291    2348 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"7d34777f741bc88a0329ff2c8195d06bc1021dfd80fc5906b368c3165581b254"} err="failed to get container status \"7d34777f741bc88a0329ff2c8195d06bc1021dfd80fc5906b368c3165581b254\": rpc error: code = Unknown desc = Error response from daemon: No such container: 7d34777f741bc88a0329ff2c8195d06bc1021dfd80fc5906b368c3165581b254"
	
	
	==> storage-provisioner [30d1039837d3] <==
	I0920 16:44:59.144596       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 16:44:59.190855       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 16:44:59.190963       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0920 16:44:59.242703       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0920 16:44:59.242900       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-877987_e0f328b7-f5c1-4dec-879b-c5605a42c985!
	I0920 16:44:59.242974       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0de4a182-a193-4042-aa21-d5008f5727b2", APIVersion:"v1", ResourceVersion:"631", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-877987_e0f328b7-f5c1-4dec-879b-c5605a42c985 became leader
	I0920 16:44:59.362269       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-877987_e0f328b7-f5c1-4dec-879b-c5605a42c985!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-877987 -n addons-877987
helpers_test.go:261: (dbg) Run:  kubectl --context addons-877987 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-bgvbz ingress-nginx-admission-patch-wrlgh
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-877987 describe pod busybox ingress-nginx-admission-create-bgvbz ingress-nginx-admission-patch-wrlgh
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-877987 describe pod busybox ingress-nginx-admission-create-bgvbz ingress-nginx-admission-patch-wrlgh: exit status 1 (94.11342ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-877987/192.168.49.2
	Start Time:       Fri, 20 Sep 2024 16:48:18 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nqr9d (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-nqr9d:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m17s                   default-scheduler  Successfully assigned default/busybox to addons-877987
	  Normal   Pulling    7m50s (x4 over 9m17s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m50s (x4 over 9m16s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m50s (x4 over 9m16s)   kubelet            Error: ErrImagePull
	  Warning  Failed     7m36s (x6 over 9m16s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m11s (x21 over 9m16s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-bgvbz" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-wrlgh" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-877987 describe pod busybox ingress-nginx-admission-create-bgvbz ingress-nginx-admission-patch-wrlgh: exit status 1
--- FAIL: TestAddons/parallel/Registry (74.47s)


Test pass (318/342)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 13.68
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.2
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.1/json-events 4.88
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.07
18 TestDownloadOnly/v1.31.1/DeleteAll 0.2
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.56
22 TestOffline 88
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 222.38
29 TestAddons/serial/Volcano 41.16
31 TestAddons/serial/GCPAuth/Namespaces 0.2
34 TestAddons/parallel/Ingress 19.54
35 TestAddons/parallel/InspektorGadget 11.92
36 TestAddons/parallel/MetricsServer 6.74
38 TestAddons/parallel/CSI 40.69
39 TestAddons/parallel/Headlamp 16.63
40 TestAddons/parallel/CloudSpanner 5.58
41 TestAddons/parallel/LocalPath 52.36
42 TestAddons/parallel/NvidiaDevicePlugin 6.44
43 TestAddons/parallel/Yakd 10.68
44 TestAddons/StoppedEnableDisable 11.18
45 TestCertOptions 36.44
46 TestCertExpiration 252.55
47 TestDockerFlags 43.71
48 TestForceSystemdFlag 47.29
49 TestForceSystemdEnv 41.88
55 TestErrorSpam/setup 37.04
56 TestErrorSpam/start 0.69
57 TestErrorSpam/status 1
58 TestErrorSpam/pause 1.42
59 TestErrorSpam/unpause 1.47
60 TestErrorSpam/stop 10.92
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 72.17
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 32.93
67 TestFunctional/serial/KubeContext 0.07
68 TestFunctional/serial/KubectlGetPods 0.1
71 TestFunctional/serial/CacheCmd/cache/add_remote 3.65
72 TestFunctional/serial/CacheCmd/cache/add_local 1.14
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
74 TestFunctional/serial/CacheCmd/cache/list 0.05
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
76 TestFunctional/serial/CacheCmd/cache/cache_reload 1.57
77 TestFunctional/serial/CacheCmd/cache/delete 0.11
78 TestFunctional/serial/MinikubeKubectlCmd 0.14
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
80 TestFunctional/serial/ExtraConfig 43.28
81 TestFunctional/serial/ComponentHealth 0.1
82 TestFunctional/serial/LogsCmd 1.15
83 TestFunctional/serial/LogsFileCmd 1.17
84 TestFunctional/serial/InvalidService 5.86
86 TestFunctional/parallel/ConfigCmd 0.51
87 TestFunctional/parallel/DashboardCmd 11.08
88 TestFunctional/parallel/DryRun 0.48
89 TestFunctional/parallel/InternationalLanguage 0.21
90 TestFunctional/parallel/StatusCmd 1.32
94 TestFunctional/parallel/ServiceCmdConnect 11.73
95 TestFunctional/parallel/AddonsCmd 0.19
96 TestFunctional/parallel/PersistentVolumeClaim 26.15
98 TestFunctional/parallel/SSHCmd 0.74
99 TestFunctional/parallel/CpCmd 2.55
101 TestFunctional/parallel/FileSync 0.32
102 TestFunctional/parallel/CertSync 2.08
106 TestFunctional/parallel/NodeLabels 0.09
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.33
110 TestFunctional/parallel/License 0.25
112 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.56
113 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
115 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.48
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
117 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
121 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
122 TestFunctional/parallel/ServiceCmd/DeployApp 6.28
123 TestFunctional/parallel/ProfileCmd/profile_not_create 0.55
124 TestFunctional/parallel/ServiceCmd/List 0.63
125 TestFunctional/parallel/ProfileCmd/profile_list 0.51
126 TestFunctional/parallel/ServiceCmd/JSONOutput 0.57
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.56
128 TestFunctional/parallel/ServiceCmd/HTTPS 0.63
129 TestFunctional/parallel/MountCmd/any-port 9.87
130 TestFunctional/parallel/ServiceCmd/Format 0.59
131 TestFunctional/parallel/ServiceCmd/URL 0.37
132 TestFunctional/parallel/MountCmd/specific-port 2.39
133 TestFunctional/parallel/MountCmd/VerifyCleanup 2.37
134 TestFunctional/parallel/Version/short 0.08
135 TestFunctional/parallel/Version/components 1.16
136 TestFunctional/parallel/ImageCommands/ImageListShort 0.29
137 TestFunctional/parallel/ImageCommands/ImageListTable 0.26
138 TestFunctional/parallel/ImageCommands/ImageListJson 0.26
139 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
140 TestFunctional/parallel/ImageCommands/ImageBuild 3.31
141 TestFunctional/parallel/ImageCommands/Setup 0.77
142 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.23
143 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.99
144 TestFunctional/parallel/DockerEnv/bash 1.32
145 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.2
146 TestFunctional/parallel/UpdateContextCmd/no_changes 0.21
147 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.2
148 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.31
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.4
150 TestFunctional/parallel/ImageCommands/ImageRemove 0.47
151 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.86
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.4
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
159 TestMultiControlPlane/serial/StartCluster 124.14
160 TestMultiControlPlane/serial/DeployApp 8.31
161 TestMultiControlPlane/serial/PingHostFromPods 1.72
162 TestMultiControlPlane/serial/AddWorkerNode 28.24
163 TestMultiControlPlane/serial/NodeLabels 0.14
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.2
165 TestMultiControlPlane/serial/CopyFile 19.06
166 TestMultiControlPlane/serial/StopSecondaryNode 11.73
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.75
168 TestMultiControlPlane/serial/RestartSecondaryNode 123.23
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.99
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 174.11
171 TestMultiControlPlane/serial/DeleteSecondaryNode 11.92
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.78
173 TestMultiControlPlane/serial/StopCluster 32.82
174 TestMultiControlPlane/serial/RestartCluster 68.87
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 1.1
176 TestMultiControlPlane/serial/AddSecondaryNode 50.53
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.11
180 TestImageBuild/serial/Setup 31.39
181 TestImageBuild/serial/NormalBuild 1.95
182 TestImageBuild/serial/BuildWithBuildArg 0.95
183 TestImageBuild/serial/BuildWithDockerIgnore 0.8
184 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.83
188 TestJSONOutput/start/Command 75.73
189 TestJSONOutput/start/Audit 0
191 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/pause/Command 0.56
195 TestJSONOutput/pause/Audit 0
197 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/unpause/Command 0.55
201 TestJSONOutput/unpause/Audit 0
203 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
206 TestJSONOutput/stop/Command 10.92
207 TestJSONOutput/stop/Audit 0
209 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
210 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
211 TestErrorJSONOutput 0.21
213 TestKicCustomNetwork/create_custom_network 38.61
214 TestKicCustomNetwork/use_default_bridge_network 34.73
215 TestKicExistingNetwork 35.23
216 TestKicCustomSubnet 34.88
217 TestKicStaticIP 33.07
218 TestMainNoArgs 0.05
219 TestMinikubeProfile 74.51
222 TestMountStart/serial/StartWithMountFirst 7.93
223 TestMountStart/serial/VerifyMountFirst 0.28
224 TestMountStart/serial/StartWithMountSecond 10.55
225 TestMountStart/serial/VerifyMountSecond 0.27
226 TestMountStart/serial/DeleteFirst 1.52
227 TestMountStart/serial/VerifyMountPostDelete 0.26
228 TestMountStart/serial/Stop 1.19
229 TestMountStart/serial/RestartStopped 8.63
230 TestMountStart/serial/VerifyMountPostStop 0.26
233 TestMultiNode/serial/FreshStart2Nodes 83.9
234 TestMultiNode/serial/DeployApp2Nodes 52.38
235 TestMultiNode/serial/PingHostFrom2Pods 1.04
236 TestMultiNode/serial/AddNode 18.89
237 TestMultiNode/serial/MultiNodeLabels 0.11
238 TestMultiNode/serial/ProfileList 0.76
239 TestMultiNode/serial/CopyFile 10.01
240 TestMultiNode/serial/StopNode 2.27
241 TestMultiNode/serial/StartAfterStop 11.41
242 TestMultiNode/serial/RestartKeepsNodes 98.88
243 TestMultiNode/serial/DeleteNode 5.69
244 TestMultiNode/serial/StopMultiNode 21.85
245 TestMultiNode/serial/RestartMultiNode 58.73
246 TestMultiNode/serial/ValidateNameConflict 38.51
251 TestPreload 142.72
253 TestScheduledStopUnix 105.14
254 TestSkaffold 120.92
256 TestInsufficientStorage 11.32
257 TestRunningBinaryUpgrade 88.18
259 TestKubernetesUpgrade 380.92
260 TestMissingContainerUpgrade 123.81
262 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
263 TestNoKubernetes/serial/StartWithK8s 46.1
264 TestNoKubernetes/serial/StartWithStopK8s 18.84
265 TestNoKubernetes/serial/Start 7.11
266 TestNoKubernetes/serial/VerifyK8sNotRunning 0.26
267 TestNoKubernetes/serial/ProfileList 1.43
268 TestNoKubernetes/serial/Stop 1.25
269 TestNoKubernetes/serial/StartNoArgs 7.42
270 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.3
282 TestStoppedBinaryUpgrade/Setup 1.44
283 TestStoppedBinaryUpgrade/Upgrade 134.84
284 TestStoppedBinaryUpgrade/MinikubeLogs 1.35
293 TestPause/serial/Start 48.34
294 TestPause/serial/SecondStartNoReconfiguration 35.75
295 TestPause/serial/Pause 0.6
296 TestPause/serial/VerifyStatus 0.32
297 TestPause/serial/Unpause 0.54
298 TestPause/serial/PauseAgain 0.87
299 TestPause/serial/DeletePaused 2.12
300 TestPause/serial/VerifyDeletedResources 0.37
301 TestNetworkPlugins/group/auto/Start 51.95
302 TestNetworkPlugins/group/auto/KubeletFlags 0.34
303 TestNetworkPlugins/group/auto/NetCatPod 9.33
304 TestNetworkPlugins/group/auto/DNS 0.26
305 TestNetworkPlugins/group/auto/Localhost 0.16
306 TestNetworkPlugins/group/auto/HairPin 0.17
307 TestNetworkPlugins/group/kindnet/Start 74.04
308 TestNetworkPlugins/group/calico/Start 74.78
309 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
310 TestNetworkPlugins/group/kindnet/KubeletFlags 0.42
311 TestNetworkPlugins/group/kindnet/NetCatPod 13.46
312 TestNetworkPlugins/group/kindnet/DNS 0.33
313 TestNetworkPlugins/group/kindnet/Localhost 0.27
314 TestNetworkPlugins/group/kindnet/HairPin 0.26
315 TestNetworkPlugins/group/calico/ControllerPod 6.01
316 TestNetworkPlugins/group/calico/KubeletFlags 0.42
317 TestNetworkPlugins/group/calico/NetCatPod 15.44
318 TestNetworkPlugins/group/custom-flannel/Start 65.86
319 TestNetworkPlugins/group/calico/DNS 0.25
320 TestNetworkPlugins/group/calico/Localhost 0.25
321 TestNetworkPlugins/group/calico/HairPin 0.22
322 TestNetworkPlugins/group/false/Start 76.19
323 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.44
324 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.41
325 TestNetworkPlugins/group/custom-flannel/DNS 0.2
326 TestNetworkPlugins/group/custom-flannel/Localhost 0.22
327 TestNetworkPlugins/group/custom-flannel/HairPin 0.27
328 TestNetworkPlugins/group/enable-default-cni/Start 70
329 TestNetworkPlugins/group/false/KubeletFlags 0.36
330 TestNetworkPlugins/group/false/NetCatPod 12.33
331 TestNetworkPlugins/group/false/DNS 0.34
332 TestNetworkPlugins/group/false/Localhost 0.24
333 TestNetworkPlugins/group/false/HairPin 0.26
334 TestNetworkPlugins/group/flannel/Start 58.73
335 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.38
336 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.35
337 TestNetworkPlugins/group/enable-default-cni/DNS 0.27
338 TestNetworkPlugins/group/enable-default-cni/Localhost 0.21
339 TestNetworkPlugins/group/enable-default-cni/HairPin 0.22
340 TestNetworkPlugins/group/bridge/Start 78.19
341 TestNetworkPlugins/group/flannel/ControllerPod 6.01
342 TestNetworkPlugins/group/flannel/KubeletFlags 0.4
343 TestNetworkPlugins/group/flannel/NetCatPod 11.36
344 TestNetworkPlugins/group/flannel/DNS 0.28
345 TestNetworkPlugins/group/flannel/Localhost 0.21
346 TestNetworkPlugins/group/flannel/HairPin 0.18
347 TestNetworkPlugins/group/kubenet/Start 82.4
348 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
349 TestNetworkPlugins/group/bridge/NetCatPod 11.3
350 TestNetworkPlugins/group/bridge/DNS 0.19
351 TestNetworkPlugins/group/bridge/Localhost 0.16
352 TestNetworkPlugins/group/bridge/HairPin 0.17
354 TestStartStop/group/old-k8s-version/serial/FirstStart 175.63
355 TestNetworkPlugins/group/kubenet/KubeletFlags 0.43
356 TestNetworkPlugins/group/kubenet/NetCatPod 9.42
357 TestNetworkPlugins/group/kubenet/DNS 0.25
358 TestNetworkPlugins/group/kubenet/Localhost 0.21
359 TestNetworkPlugins/group/kubenet/HairPin 0.22
361 TestStartStop/group/no-preload/serial/FirstStart 81.71
362 TestStartStop/group/no-preload/serial/DeployApp 8.37
363 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.09
364 TestStartStop/group/no-preload/serial/Stop 10.93
365 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
366 TestStartStop/group/no-preload/serial/SecondStart 268.02
367 TestStartStop/group/old-k8s-version/serial/DeployApp 9.6
368 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.17
369 TestStartStop/group/old-k8s-version/serial/Stop 11.07
370 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
371 TestStartStop/group/old-k8s-version/serial/SecondStart 137.42
372 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.03
373 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.13
374 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
375 TestStartStop/group/old-k8s-version/serial/Pause 2.82
377 TestStartStop/group/embed-certs/serial/FirstStart 47.24
378 TestStartStop/group/embed-certs/serial/DeployApp 9.39
379 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.53
380 TestStartStop/group/embed-certs/serial/Stop 11.14
381 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
382 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
383 TestStartStop/group/embed-certs/serial/SecondStart 270.76
384 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.14
385 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.32
386 TestStartStop/group/no-preload/serial/Pause 4.29
388 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 55.58
389 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.47
390 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.13
391 TestStartStop/group/default-k8s-diff-port/serial/Stop 11
392 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
393 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 268.66
394 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
395 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
396 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
397 TestStartStop/group/embed-certs/serial/Pause 2.96
399 TestStartStop/group/newest-cni/serial/FirstStart 36.76
400 TestStartStop/group/newest-cni/serial/DeployApp 0
401 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.22
402 TestStartStop/group/newest-cni/serial/Stop 9.63
403 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
404 TestStartStop/group/newest-cni/serial/SecondStart 20.09
405 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
406 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
407 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.34
408 TestStartStop/group/newest-cni/serial/Pause 3.61
409 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
410 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
411 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23
412 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.75
TestDownloadOnly/v1.20.0/json-events (13.68s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-923497 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-923497 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (13.679779749s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (13.68s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0920 16:43:47.390122    7542 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0920 16:43:47.390199    7542 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-2235/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-923497
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-923497: exit status 85 (70.274952ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-923497 | jenkins | v1.34.0 | 20 Sep 24 16:43 UTC |          |
	|         | -p download-only-923497        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 16:43:33
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 16:43:33.749203    7547 out.go:345] Setting OutFile to fd 1 ...
	I0920 16:43:33.749316    7547 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 16:43:33.749325    7547 out.go:358] Setting ErrFile to fd 2...
	I0920 16:43:33.749331    7547 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 16:43:33.749581    7547 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-2235/.minikube/bin
	W0920 16:43:33.749707    7547 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19672-2235/.minikube/config/config.json: open /home/jenkins/minikube-integration/19672-2235/.minikube/config/config.json: no such file or directory
	I0920 16:43:33.750097    7547 out.go:352] Setting JSON to true
	I0920 16:43:33.750860    7547 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1565,"bootTime":1726849049,"procs":148,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0920 16:43:33.750935    7547 start.go:139] virtualization:  
	I0920 16:43:33.754200    7547 out.go:97] [download-only-923497] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W0920 16:43:33.754345    7547 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19672-2235/.minikube/cache/preloaded-tarball: no such file or directory
	I0920 16:43:33.754384    7547 notify.go:220] Checking for updates...
	I0920 16:43:33.757125    7547 out.go:169] MINIKUBE_LOCATION=19672
	I0920 16:43:33.759763    7547 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 16:43:33.762526    7547 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19672-2235/kubeconfig
	I0920 16:43:33.764706    7547 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-2235/.minikube
	I0920 16:43:33.766742    7547 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0920 16:43:33.771069    7547 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0920 16:43:33.771340    7547 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 16:43:33.792476    7547 docker.go:123] docker version: linux-27.3.0:Docker Engine - Community
	I0920 16:43:33.792579    7547 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 16:43:34.111335    7547 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-20 16:43:34.099918826 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.6]] Warnings:<nil>}}
	I0920 16:43:34.111447    7547 docker.go:318] overlay module found
	I0920 16:43:34.114096    7547 out.go:97] Using the docker driver based on user configuration
	I0920 16:43:34.114131    7547 start.go:297] selected driver: docker
	I0920 16:43:34.114161    7547 start.go:901] validating driver "docker" against <nil>
	I0920 16:43:34.114284    7547 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 16:43:34.174584    7547 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-20 16:43:34.165667253 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.6]] Warnings:<nil>}}
	I0920 16:43:34.174789    7547 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 16:43:34.175068    7547 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0920 16:43:34.175237    7547 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0920 16:43:34.178542    7547 out.go:169] Using Docker driver with root privileges
	I0920 16:43:34.182071    7547 cni.go:84] Creating CNI manager for ""
	I0920 16:43:34.182132    7547 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0920 16:43:34.182224    7547 start.go:340] cluster config:
	{Name:download-only-923497 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-923497 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 16:43:34.184478    7547 out.go:97] Starting "download-only-923497" primary control-plane node in "download-only-923497" cluster
	I0920 16:43:34.184499    7547 cache.go:121] Beginning downloading kic base image for docker with docker
	I0920 16:43:34.186690    7547 out.go:97] Pulling base image v0.0.45-1726784731-19672 ...
	I0920 16:43:34.186717    7547 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0920 16:43:34.186872    7547 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon
	I0920 16:43:34.202993    7547 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
	I0920 16:43:34.203164    7547 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory
	I0920 16:43:34.203264    7547 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
	I0920 16:43:34.247498    7547 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0920 16:43:34.247532    7547 cache.go:56] Caching tarball of preloaded images
	I0920 16:43:34.247700    7547 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0920 16:43:34.250489    7547 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0920 16:43:34.250516    7547 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0920 16:43:34.335411    7547 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /home/jenkins/minikube-integration/19672-2235/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0920 16:43:38.234500    7547 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0920 16:43:38.234612    7547 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19672-2235/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0920 16:43:39.234453    7547 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0920 16:43:39.234847    7547 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/download-only-923497/config.json ...
	I0920 16:43:39.234882    7547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/download-only-923497/config.json: {Name:mk516b0bd27fc642ef4140591eee8fa0a94cf917 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:43:39.235062    7547 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0920 16:43:39.235247    7547 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/19672-2235/.minikube/cache/linux/arm64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-923497 host does not exist
	  To start a cluster, run: "minikube start -p download-only-923497"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

TestDownloadOnly/v1.20.0/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.20s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-923497
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.31.1/json-events (4.88s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-777196 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-777196 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker  --container-runtime=docker: (4.874817409s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (4.88s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0920 16:43:52.669911    7542 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0920 16:43:52.669950    7542 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-2235/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-777196
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-777196: exit status 85 (68.263124ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-923497 | jenkins | v1.34.0 | 20 Sep 24 16:43 UTC |                     |
	|         | -p download-only-923497        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 20 Sep 24 16:43 UTC | 20 Sep 24 16:43 UTC |
	| delete  | -p download-only-923497        | download-only-923497 | jenkins | v1.34.0 | 20 Sep 24 16:43 UTC | 20 Sep 24 16:43 UTC |
	| start   | -o=json --download-only        | download-only-777196 | jenkins | v1.34.0 | 20 Sep 24 16:43 UTC |                     |
	|         | -p download-only-777196        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 16:43:47
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 16:43:47.833451    7751 out.go:345] Setting OutFile to fd 1 ...
	I0920 16:43:47.833562    7751 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 16:43:47.833574    7751 out.go:358] Setting ErrFile to fd 2...
	I0920 16:43:47.833580    7751 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 16:43:47.833898    7751 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-2235/.minikube/bin
	I0920 16:43:47.834521    7751 out.go:352] Setting JSON to true
	I0920 16:43:47.835222    7751 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1579,"bootTime":1726849049,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0920 16:43:47.835283    7751 start.go:139] virtualization:  
	I0920 16:43:47.837931    7751 out.go:97] [download-only-777196] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0920 16:43:47.838128    7751 notify.go:220] Checking for updates...
	I0920 16:43:47.840263    7751 out.go:169] MINIKUBE_LOCATION=19672
	I0920 16:43:47.842257    7751 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 16:43:47.845089    7751 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19672-2235/kubeconfig
	I0920 16:43:47.847364    7751 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-2235/.minikube
	I0920 16:43:47.849573    7751 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0920 16:43:47.854043    7751 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0920 16:43:47.854378    7751 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 16:43:47.876450    7751 docker.go:123] docker version: linux-27.3.0:Docker Engine - Community
	I0920 16:43:47.876558    7751 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 16:43:47.932934    7751 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-20 16:43:47.92407147 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.6]] Warnings:<nil>}}
	I0920 16:43:47.933047    7751 docker.go:318] overlay module found
	I0920 16:43:47.935333    7751 out.go:97] Using the docker driver based on user configuration
	I0920 16:43:47.935364    7751 start.go:297] selected driver: docker
	I0920 16:43:47.935372    7751 start.go:901] validating driver "docker" against <nil>
	I0920 16:43:47.935485    7751 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 16:43:47.980793    7751 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-20 16:43:47.971797167 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.6]] Warnings:<nil>}}
	I0920 16:43:47.980998    7751 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 16:43:47.981274    7751 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0920 16:43:47.981421    7751 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0920 16:43:47.983630    7751 out.go:169] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-777196 host does not exist
	  To start a cluster, run: "minikube start -p download-only-777196"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.07s)
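The driver validation above shells out to `docker system info --format "{{json .}}"` and reads fields such as `ServerVersion` and `OSType` out of the resulting JSON. A minimal sketch of that field extraction; the `info` string is a canned stub standing in for real daemon output, and real consumers (like minikube's info.go) decode the full JSON rather than scraping it with sed:

```shell
#!/bin/sh
# Stub standing in for: docker system info --format "{{json .}}"
# (a tiny excerpt of the fields the log above prints).
info='{"ServerVersion":"27.3.0","OSType":"linux","Architecture":"aarch64","NCPU":2}'

# Pull individual fields out with sed; each key appears once in this stub.
server_version=$(printf '%s' "$info" | sed -n 's/.*"ServerVersion":"\([^"]*\)".*/\1/p')
os_type=$(printf '%s' "$info" | sed -n 's/.*"OSType":"\([^"]*\)".*/\1/p')
echo "docker ${server_version} on ${os_type}"
```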

TestDownloadOnly/v1.31.1/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.20s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-777196
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.56s)

=== RUN   TestBinaryMirror
I0920 16:43:53.857890    7542 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-528781 --alsologtostderr --binary-mirror http://127.0.0.1:37459 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-528781" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-528781
--- PASS: TestBinaryMirror (0.56s)
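The `Not caching binary` line above shows the kubectl download being fetched with a `?checksum=file:...kubectl.sha256` query, i.e. verified against a published SHA-256 sidecar file. A rough shell sketch of that file-against-sidecar check, using local stand-in files rather than a real download (the file contents here are placeholders; minikube performs the equivalent check in Go):

```shell
#!/bin/sh
set -eu

workdir=$(mktemp -d)
cd "$workdir"

# Stand-in for the downloaded binary; a real run would fetch
# https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl instead.
printf 'fake kubectl payload' > kubectl

# Stand-in for the published kubectl.sha256 sidecar, which carries only
# the bare hex digest.
sha256sum kubectl | awk '{print $1}' > kubectl.sha256

# sha256sum -c expects "<digest>  <filename>" lines, so rebuild that
# format from the bare digest before checking; a mismatch exits non-zero.
printf '%s  kubectl\n' "$(cat kubectl.sha256)" | sha256sum -c -
```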

TestOffline (88s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-arm64 start -p offline-docker-290502 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-arm64 start -p offline-docker-290502 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (1m25.819058494s)
helpers_test.go:175: Cleaning up "offline-docker-290502" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p offline-docker-290502
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p offline-docker-290502: (2.180546007s)
--- PASS: TestOffline (88.00s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-877987
addons_test.go:975: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-877987: exit status 85 (60.484312ms)

-- stdout --
	* Profile "addons-877987" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-877987"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-877987
addons_test.go:986: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-877987: exit status 85 (67.566245ms)

-- stdout --
	* Profile "addons-877987" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-877987"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (222.38s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-877987 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-877987 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns: (3m42.374365888s)
--- PASS: TestAddons/Setup (222.38s)

TestAddons/serial/Volcano (41.16s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:851: volcano-controller stabilized in 46.221604ms
addons_test.go:843: volcano-admission stabilized in 47.092798ms
addons_test.go:835: volcano-scheduler stabilized in 47.152254ms
addons_test.go:857: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-vtztx" [5e44aeff-b677-4e76-af42-46dd3cf4b194] Running
addons_test.go:857: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003938214s
addons_test.go:861: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-6bqvs" [22a828b3-79a9-4b40-8ba5-406cb38b3b1f] Running
addons_test.go:861: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 6.004272488s
addons_test.go:865: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-nvdt9" [30a0832b-2e27-4315-86a2-72229a3474e3] Running
addons_test.go:865: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.009301886s
addons_test.go:870: (dbg) Run:  kubectl --context addons-877987 delete -n volcano-system job volcano-admission-init
addons_test.go:876: (dbg) Run:  kubectl --context addons-877987 create -f testdata/vcjob.yaml
addons_test.go:884: (dbg) Run:  kubectl --context addons-877987 get vcjob -n my-volcano
addons_test.go:902: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [48552437-6033-4c28-85fc-d12b38b7a115] Pending
helpers_test.go:344: "test-job-nginx-0" [48552437-6033-4c28-85fc-d12b38b7a115] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [48552437-6033-4c28-85fc-d12b38b7a115] Running
addons_test.go:902: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.004097102s
addons_test.go:906: (dbg) Run:  out/minikube-linux-arm64 -p addons-877987 addons disable volcano --alsologtostderr -v=1
addons_test.go:906: (dbg) Done: out/minikube-linux-arm64 -p addons-877987 addons disable volcano --alsologtostderr -v=1: (10.469783405s)
--- PASS: TestAddons/serial/Volcano (41.16s)

TestAddons/serial/GCPAuth/Namespaces (0.2s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context addons-877987 create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context addons-877987 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.20s)

TestAddons/parallel/Ingress (19.54s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:205: (dbg) Run:  kubectl --context addons-877987 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:230: (dbg) Run:  kubectl --context addons-877987 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context addons-877987 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [a95bb8a3-1cce-4fd9-a386-87632d1c1b41] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [a95bb8a3-1cce-4fd9-a386-87632d1c1b41] Running
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.004861091s
I0920 16:57:45.497264    7542 kapi.go:150] Service nginx in namespace default found.
addons_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p addons-877987 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:284: (dbg) Run:  kubectl --context addons-877987 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:289: (dbg) Run:  out/minikube-linux-arm64 -p addons-877987 ip
addons_test.go:295: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p addons-877987 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:304: (dbg) Done: out/minikube-linux-arm64 -p addons-877987 addons disable ingress-dns --alsologtostderr -v=1: (1.263081249s)
addons_test.go:309: (dbg) Run:  out/minikube-linux-arm64 -p addons-877987 addons disable ingress --alsologtostderr -v=1
addons_test.go:309: (dbg) Done: out/minikube-linux-arm64 -p addons-877987 addons disable ingress --alsologtostderr -v=1: (7.704350298s)
--- PASS: TestAddons/parallel/Ingress (19.54s)

TestAddons/parallel/InspektorGadget (11.92s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-fmbhf" [0e0e8336-c68e-4cca-9118-71a3bca51144] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004011236s
addons_test.go:789: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-877987
addons_test.go:789: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-877987: (5.915088605s)
--- PASS: TestAddons/parallel/InspektorGadget (11.92s)

TestAddons/parallel/MetricsServer (6.74s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 3.725841ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-gmqh2" [f6899345-ec86-427c-9cdd-46f043d24818] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003966166s
addons_test.go:413: (dbg) Run:  kubectl --context addons-877987 top pods -n kube-system
addons_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p addons-877987 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.74s)

TestAddons/parallel/CSI (40.69s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0920 16:56:38.072336    7542 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0920 16:56:38.079203    7542 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0920 16:56:38.079979    7542 kapi.go:107] duration metric: took 7.65106ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:505: csi-hostpath-driver pods stabilized in 7.871316ms
addons_test.go:508: (dbg) Run:  kubectl --context addons-877987 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877987 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877987 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877987 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877987 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context addons-877987 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [ca8d8327-bd9e-4885-ba89-f44b2c28b7d5] Pending
helpers_test.go:344: "task-pv-pod" [ca8d8327-bd9e-4885-ba89-f44b2c28b7d5] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [ca8d8327-bd9e-4885-ba89-f44b2c28b7d5] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.003410826s
addons_test.go:528: (dbg) Run:  kubectl --context addons-877987 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-877987 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-877987 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context addons-877987 delete pod task-pv-pod
addons_test.go:544: (dbg) Run:  kubectl --context addons-877987 delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context addons-877987 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877987 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877987 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877987 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877987 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877987 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877987 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877987 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877987 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877987 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877987 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877987 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877987 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context addons-877987 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [eab33470-f6af-45b7-a52d-ab21edc5c602] Pending
helpers_test.go:344: "task-pv-pod-restore" [eab33470-f6af-45b7-a52d-ab21edc5c602] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [eab33470-f6af-45b7-a52d-ab21edc5c602] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003710828s
addons_test.go:570: (dbg) Run:  kubectl --context addons-877987 delete pod task-pv-pod-restore
addons_test.go:574: (dbg) Run:  kubectl --context addons-877987 delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context addons-877987 delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-linux-arm64 -p addons-877987 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-linux-arm64 -p addons-877987 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.675227392s)
addons_test.go:586: (dbg) Run:  out/minikube-linux-arm64 -p addons-877987 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (40.69s)
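Each repeated `helpers_test.go:394` line above is one poll of `kubectl get pvc ... -o jsonpath={.status.phase}`, looped until the claim reports `Bound`. The same loop in shell, with a stub in place of kubectl so the sketch is self-contained (`get_phase` and its bind-on-third-poll behavior are invented for illustration):

```shell
#!/bin/sh
set -eu

# Stub for: kubectl --context addons-877987 get pvc hpvc -o 'jsonpath={.status.phase}'
# Pretends the PVC leaves Pending and binds on the third poll ($1 = poll count).
get_phase() {
  if [ "$1" -ge 3 ]; then echo Bound; else echo Pending; fi
}

wait_for_bound() {
  tries=1
  while [ "$tries" -le 10 ]; do
    phase=$(get_phase "$tries")
    if [ "$phase" = "Bound" ]; then
      polls=$tries
      return 0
    fi
    tries=$((tries + 1))
    # a real loop would sleep between kubectl invocations
  done
  return 1
}

wait_for_bound
echo "pvc bound after $polls polls"
```

The real helper also enforces an overall deadline (6m0s in the log above); the 10-try cap plays that role in this sketch.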

TestAddons/parallel/Headlamp (16.63s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-877987 --alsologtostderr -v=1
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-5vp66" [e6154e37-edbb-4fe4-84a4-5d979311817c] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-5vp66" [e6154e37-edbb-4fe4-84a4-5d979311817c] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-5vp66" [e6154e37-edbb-4fe4-84a4-5d979311817c] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003285471s
addons_test.go:777: (dbg) Run:  out/minikube-linux-arm64 -p addons-877987 addons disable headlamp --alsologtostderr -v=1
addons_test.go:777: (dbg) Done: out/minikube-linux-arm64 -p addons-877987 addons disable headlamp --alsologtostderr -v=1: (5.756481134s)
--- PASS: TestAddons/parallel/Headlamp (16.63s)

TestAddons/parallel/CloudSpanner (5.58s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-9sj7c" [0dbe94b4-3717-4a7c-a586-117cf75b3c32] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.005418535s
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-877987
--- PASS: TestAddons/parallel/CloudSpanner (5.58s)

TestAddons/parallel/LocalPath (52.36s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:920: (dbg) Run:  kubectl --context addons-877987 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:926: (dbg) Run:  kubectl --context addons-877987 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:930: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877987 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877987 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877987 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877987 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-877987 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [276bcd0b-9be3-4112-b548-e98e8e46b1ec] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [276bcd0b-9be3-4112-b548-e98e8e46b1ec] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [276bcd0b-9be3-4112-b548-e98e8e46b1ec] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003629421s
addons_test.go:938: (dbg) Run:  kubectl --context addons-877987 get pvc test-pvc -o=json
addons_test.go:947: (dbg) Run:  out/minikube-linux-arm64 -p addons-877987 ssh "cat /opt/local-path-provisioner/pvc-4ba9ddfc-0fa1-4d32-8e17-ec12aa50f5d5_default_test-pvc/file1"
addons_test.go:959: (dbg) Run:  kubectl --context addons-877987 delete pod test-local-path
addons_test.go:963: (dbg) Run:  kubectl --context addons-877987 delete pvc test-pvc
addons_test.go:967: (dbg) Run:  out/minikube-linux-arm64 -p addons-877987 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:967: (dbg) Done: out/minikube-linux-arm64 -p addons-877987 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.286491946s)
--- PASS: TestAddons/parallel/LocalPath (52.36s)

TestAddons/parallel/NvidiaDevicePlugin (6.44s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-wrczs" [afc95ef0-9c2a-4b80-a5c8-3df87415fdcc] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003798381s
addons_test.go:1002: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-877987
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.44s)

TestAddons/parallel/Yakd (10.68s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-k78vc" [cfc6bd1a-53af-4517-8d6d-53a2062a23b2] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004020175s
addons_test.go:1014: (dbg) Run:  out/minikube-linux-arm64 -p addons-877987 addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-linux-arm64 -p addons-877987 addons disable yakd --alsologtostderr -v=1: (5.674687761s)
--- PASS: TestAddons/parallel/Yakd (10.68s)

TestAddons/StoppedEnableDisable (11.18s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-877987
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-877987: (10.920544062s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-877987
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-877987
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-877987
--- PASS: TestAddons/StoppedEnableDisable (11.18s)

TestCertOptions (36.44s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-710146 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
E0920 17:37:41.225374    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/functional-108853/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-710146 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (33.640448726s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-710146 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-710146 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-710146 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-710146" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-710146
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-710146: (2.145583027s)
--- PASS: TestCertOptions (36.44s)

TestCertExpiration (252.55s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-097845 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
E0920 17:37:36.883106    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/addons-877987/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-097845 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (41.760553292s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-097845 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-097845 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (28.248478676s)
helpers_test.go:175: Cleaning up "cert-expiration-097845" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-097845
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-097845: (2.541897098s)
--- PASS: TestCertExpiration (252.55s)

TestDockerFlags (43.71s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-arm64 start -p docker-flags-101899 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-arm64 start -p docker-flags-101899 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (40.475706717s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-101899 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-101899 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-101899" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-flags-101899
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-flags-101899: (2.384262637s)
--- PASS: TestDockerFlags (43.71s)

TestForceSystemdFlag (47.29s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-130693 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-130693 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (43.057962964s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-130693 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-130693" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-130693
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-130693: (3.835719362s)
--- PASS: TestForceSystemdFlag (47.29s)

TestForceSystemdEnv (41.88s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-117744 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-117744 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (39.267517796s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-117744 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-117744" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-117744
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-117744: (2.223293942s)
--- PASS: TestForceSystemdEnv (41.88s)

TestErrorSpam/setup (37.04s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-970019 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-970019 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-970019 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-970019 --driver=docker  --container-runtime=docker: (37.040434334s)
--- PASS: TestErrorSpam/setup (37.04s)

TestErrorSpam/start (0.69s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-970019 --log_dir /tmp/nospam-970019 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-970019 --log_dir /tmp/nospam-970019 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-970019 --log_dir /tmp/nospam-970019 start --dry-run
--- PASS: TestErrorSpam/start (0.69s)

TestErrorSpam/status (1s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-970019 --log_dir /tmp/nospam-970019 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-970019 --log_dir /tmp/nospam-970019 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-970019 --log_dir /tmp/nospam-970019 status
--- PASS: TestErrorSpam/status (1.00s)

TestErrorSpam/pause (1.42s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-970019 --log_dir /tmp/nospam-970019 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-970019 --log_dir /tmp/nospam-970019 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-970019 --log_dir /tmp/nospam-970019 pause
--- PASS: TestErrorSpam/pause (1.42s)

TestErrorSpam/unpause (1.47s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-970019 --log_dir /tmp/nospam-970019 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-970019 --log_dir /tmp/nospam-970019 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-970019 --log_dir /tmp/nospam-970019 unpause
--- PASS: TestErrorSpam/unpause (1.47s)

TestErrorSpam/stop (10.92s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-970019 --log_dir /tmp/nospam-970019 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-970019 --log_dir /tmp/nospam-970019 stop: (10.710166687s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-970019 --log_dir /tmp/nospam-970019 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-970019 --log_dir /tmp/nospam-970019 stop
--- PASS: TestErrorSpam/stop (10.92s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19672-2235/.minikube/files/etc/test/nested/copy/7542/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (72.17s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-108853 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-108853 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (1m12.166123031s)
--- PASS: TestFunctional/serial/StartWithProxy (72.17s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (32.93s)

=== RUN   TestFunctional/serial/SoftStart
I0920 17:01:07.602071    7542 config.go:182] Loaded profile config "functional-108853": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-108853 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-108853 --alsologtostderr -v=8: (32.923171929s)
functional_test.go:663: soft start took 32.925891669s for "functional-108853" cluster.
I0920 17:01:40.525583    7542 config.go:182] Loaded profile config "functional-108853": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (32.93s)

TestFunctional/serial/KubeContext (0.07s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-108853 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.65s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-108853 cache add registry.k8s.io/pause:3.1: (1.530472982s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-108853 cache add registry.k8s.io/pause:3.3: (1.207504228s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.65s)

TestFunctional/serial/CacheCmd/cache/add_local (1.14s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-108853 /tmp/TestFunctionalserialCacheCmdcacheadd_local3706329445/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 cache add minikube-local-cache-test:functional-108853
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 cache delete minikube-local-cache-test:functional-108853
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-108853
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.14s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.57s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-108853 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (294.116628ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.57s)

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 kubectl -- --context functional-108853 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-108853 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctional/serial/ExtraConfig (43.28s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-108853 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-108853 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (43.278987874s)
functional_test.go:761: restart took 43.283347693s for "functional-108853" cluster.
I0920 17:02:31.120723    7542 config.go:182] Loaded profile config "functional-108853": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (43.28s)

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-108853 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.15s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-108853 logs: (1.15390021s)
--- PASS: TestFunctional/serial/LogsCmd (1.15s)

TestFunctional/serial/LogsFileCmd (1.17s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 logs --file /tmp/TestFunctionalserialLogsFileCmd1634126937/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-108853 logs --file /tmp/TestFunctionalserialLogsFileCmd1634126937/001/logs.txt: (1.167608206s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.17s)

TestFunctional/serial/InvalidService (5.86s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-108853 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-108853
E0920 17:02:36.887320    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/addons-877987/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:02:36.893747    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/addons-877987/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:02:36.906435    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/addons-877987/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:02:36.927998    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/addons-877987/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:02:36.969534    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/addons-877987/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:02:37.051078    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/addons-877987/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:02:37.212815    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/addons-877987/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-108853: exit status 115 (595.887793ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32150 |
	|-----------|-------------|-------------|---------------------------|

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-108853 delete -f testdata/invalidsvc.yaml
E0920 17:02:37.535075    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/addons-877987/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:02:38.177065    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/addons-877987/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2327: (dbg) Done: kubectl --context functional-108853 delete -f testdata/invalidsvc.yaml: (2.004464354s)
--- PASS: TestFunctional/serial/InvalidService (5.86s)

TestFunctional/parallel/ConfigCmd (0.51s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-108853 config get cpus: exit status 14 (79.595534ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-108853 config get cpus: exit status 14 (70.154347ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.51s)

TestFunctional/parallel/DashboardCmd (11.08s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-108853 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-108853 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 49104: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.08s)

TestFunctional/parallel/DryRun (0.48s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-108853 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-108853 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (212.18265ms)

-- stdout --
	* [functional-108853] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19672
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19672-2235/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-2235/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile

-- /stdout --
** stderr ** 
	I0920 17:03:12.479278   48759 out.go:345] Setting OutFile to fd 1 ...
	I0920 17:03:12.479419   48759 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:03:12.479430   48759 out.go:358] Setting ErrFile to fd 2...
	I0920 17:03:12.479437   48759 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:03:12.479693   48759 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-2235/.minikube/bin
	I0920 17:03:12.480481   48759 out.go:352] Setting JSON to false
	I0920 17:03:12.481496   48759 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":2744,"bootTime":1726849049,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0920 17:03:12.481573   48759 start.go:139] virtualization:  
	I0920 17:03:12.484261   48759 out.go:177] * [functional-108853] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0920 17:03:12.486647   48759 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 17:03:12.486811   48759 notify.go:220] Checking for updates...
	I0920 17:03:12.491416   48759 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 17:03:12.493621   48759 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-2235/kubeconfig
	I0920 17:03:12.496254   48759 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-2235/.minikube
	I0920 17:03:12.498218   48759 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0920 17:03:12.500557   48759 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 17:03:12.503498   48759 config.go:182] Loaded profile config "functional-108853": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 17:03:12.504125   48759 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 17:03:12.548697   48759 docker.go:123] docker version: linux-27.3.0:Docker Engine - Community
	I0920 17:03:12.548831   48759 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 17:03:12.624132   48759 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-20 17:03:12.614205921 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.6]] Warnings:<nil>}}
	I0920 17:03:12.624322   48759 docker.go:318] overlay module found
	I0920 17:03:12.628165   48759 out.go:177] * Using the docker driver based on existing profile
	I0920 17:03:12.630795   48759 start.go:297] selected driver: docker
	I0920 17:03:12.630824   48759 start.go:901] validating driver "docker" against &{Name:functional-108853 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-108853 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:03:12.630952   48759 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 17:03:12.635087   48759 out.go:201] 
	W0920 17:03:12.638214   48759 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0920 17:03:12.641211   48759 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-108853 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.48s)

TestFunctional/parallel/InternationalLanguage (0.21s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-108853 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-108853 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (209.089555ms)

-- stdout --
	* [functional-108853] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19672
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19672-2235/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-2235/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant

-- /stdout --
** stderr ** 
	I0920 17:03:12.288049   48715 out.go:345] Setting OutFile to fd 1 ...
	I0920 17:03:12.288265   48715 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:03:12.288295   48715 out.go:358] Setting ErrFile to fd 2...
	I0920 17:03:12.288317   48715 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:03:12.288789   48715 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-2235/.minikube/bin
	I0920 17:03:12.289347   48715 out.go:352] Setting JSON to false
	I0920 17:03:12.290349   48715 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":2744,"bootTime":1726849049,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0920 17:03:12.290454   48715 start.go:139] virtualization:  
	I0920 17:03:12.293271   48715 out.go:177] * [functional-108853] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I0920 17:03:12.295837   48715 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 17:03:12.295905   48715 notify.go:220] Checking for updates...
	I0920 17:03:12.301116   48715 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 17:03:12.303226   48715 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-2235/kubeconfig
	I0920 17:03:12.305141   48715 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-2235/.minikube
	I0920 17:03:12.307123   48715 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0920 17:03:12.310985   48715 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 17:03:12.313486   48715 config.go:182] Loaded profile config "functional-108853": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 17:03:12.314123   48715 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 17:03:12.363719   48715 docker.go:123] docker version: linux-27.3.0:Docker Engine - Community
	I0920 17:03:12.363829   48715 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 17:03:12.420860   48715 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-20 17:03:12.409248022 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.6]] Warnings:<nil>}}
	I0920 17:03:12.420964   48715 docker.go:318] overlay module found
	I0920 17:03:12.423398   48715 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0920 17:03:12.425179   48715 start.go:297] selected driver: docker
	I0920 17:03:12.425196   48715 start.go:901] validating driver "docker" against &{Name:functional-108853 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-108853 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:03:12.425304   48715 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 17:03:12.428059   48715 out.go:201] 
	W0920 17:03:12.429997   48715 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0920 17:03:12.432065   48715 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.21s)

TestFunctional/parallel/StatusCmd (1.32s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.32s)

TestFunctional/parallel/ServiceCmdConnect (11.73s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-108853 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-108853 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-rmmh4" [39c538af-5a3f-4efc-8560-1cb343891241] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-rmmh4" [39c538af-5a3f-4efc-8560-1cb343891241] Running
E0920 17:02:57.388555    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/addons-877987/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.00545171s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:31310
functional_test.go:1675: http://192.168.49.2:31310: success! body:

Hostname: hello-node-connect-65d86f57f4-rmmh4

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31310
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.73s)

TestFunctional/parallel/AddonsCmd (0.19s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

TestFunctional/parallel/PersistentVolumeClaim (26.15s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [41b602f8-dfab-472a-95fb-91aa6eb5fb2d] Running
E0920 17:02:42.024876    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/addons-877987/client.crt: no such file or directory" logger="UnhandledError"
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.00467163s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-108853 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-108853 apply -f testdata/storage-provisioner/pvc.yaml
E0920 17:02:47.146719    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/addons-877987/client.crt: no such file or directory" logger="UnhandledError"
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-108853 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-108853 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [e7d7bea4-0e14-4418-b9ea-34534e5b2ff8] Pending
helpers_test.go:344: "sp-pod" [e7d7bea4-0e14-4418-b9ea-34534e5b2ff8] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [e7d7bea4-0e14-4418-b9ea-34534e5b2ff8] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.003947205s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-108853 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-108853 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-108853 delete -f testdata/storage-provisioner/pod.yaml: (1.155706475s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-108853 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [5b79d07a-beb5-4620-b1d5-44e4a73a0015] Pending
helpers_test.go:344: "sp-pod" [5b79d07a-beb5-4620-b1d5-44e4a73a0015] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [5b79d07a-beb5-4620-b1d5-44e4a73a0015] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.00345785s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-108853 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.15s)
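The pass above hinges on data surviving pod deletion: the first sp-pod touches /tmp/mount/foo, the pod is deleted, and a second pod mounting the same claim must still list the file. A minimal stand-alone sketch of that persistence check, with a scratch host directory standing in for the dynamically provisioned volume (the directory and file names are illustrative, not taken from the testdata manifests):

```shell
#!/bin/sh
# Stand-in for the PVC-backed volume; a real run would use the provisioned PV.
PV=$(mktemp -d)

# First pod: write a marker file into the mounted volume
# (cf. `kubectl exec sp-pod -- touch /tmp/mount/foo` above).
touch "$PV/foo"

# First pod deleted, second pod mounts the same claim; the data must persist
# (cf. `kubectl exec sp-pod -- ls /tmp/mount` above).
ls "$PV"
```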

                                                
                                    
TestFunctional/parallel/SSHCmd (0.74s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.74s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.55s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 cp testdata/cp-test.txt /home/docker/cp-test.txt
E0920 17:02:39.462465    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/addons-877987/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 ssh -n functional-108853 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 cp functional-108853:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2834465881/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 ssh -n functional-108853 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 ssh -n functional-108853 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.55s)

                                                
                                    
TestFunctional/parallel/FileSync (0.32s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/7542/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 ssh "sudo cat /etc/test/nested/copy/7542/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.32s)

                                                
                                    
TestFunctional/parallel/CertSync (2.08s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/7542.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 ssh "sudo cat /etc/ssl/certs/7542.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/7542.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 ssh "sudo cat /usr/share/ca-certificates/7542.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/75422.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 ssh "sudo cat /etc/ssl/certs/75422.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/75422.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 ssh "sudo cat /usr/share/ca-certificates/75422.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.08s)
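The hashed filenames checked above (51391683.0, 3ec20f2e.0) follow the OpenSSL c_rehash convention: each CA certificate is installed under /etc/ssl/certs as `<subject_hash>.0` so the TLS stack can find it by subject. A sketch of how such a name is derived, assuming `openssl` is installed (the CN and file paths here are illustrative, not the test's actual certs):

```shell
#!/bin/sh
# Generate a throwaway self-signed CA cert (illustrative subject and paths).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=example-ca" \
  -keyout /tmp/ca.key -out /tmp/ca.pem 2>/dev/null

# The install name is the 8-hex-digit subject hash plus a collision-counter
# suffix, which is where names like 51391683.0 in the log come from.
hash=$(openssl x509 -in /tmp/ca.pem -noout -subject_hash)
echo "/etc/ssl/certs/${hash}.0"
```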

                                                
                                    
TestFunctional/parallel/NodeLabels (0.09s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-108853 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.33s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-108853 ssh "sudo systemctl is-active crio": exit status 1 (324.99639ms)

                                                
                                                
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.33s)
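The non-zero exit above is the expected result: `systemctl is-active` reports a stopped unit by printing `inactive` and exiting 3, so the test captures the status instead of letting it abort the run. A minimal sketch of that capture pattern, with a stub probe standing in for the `minikube ssh "sudo systemctl is-active crio"` call:

```shell
#!/bin/sh
# Stub standing in for `systemctl is-active crio` on a host where cri-o is off:
# it prints the state and exits 3, as systemd does for inactive units.
probe() { echo "inactive"; return 3; }

# Capture both stdout and the exit status without aborting on failure.
out=$(probe) && rc=0 || rc=$?
echo "output=$out rc=$rc"   # → output=inactive rc=3
```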

                                                
                                    
TestFunctional/parallel/License (0.25s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.25s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.56s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-108853 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-108853 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-108853 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 45949: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-108853 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.56s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-108853 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.48s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-108853 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [6c20eda8-bfaa-4d74-bed0-9b12df964823] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [6c20eda8-bfaa-4d74-bed0-9b12df964823] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.004544194s
I0920 17:02:49.698244    7542 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.48s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-108853 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.105.84.204 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-108853 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (6.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-108853 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-108853 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-26vs5" [d8b535e2-4790-4e65-b8ea-4a317ee67446] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-26vs5" [d8b535e2-4790-4e65-b8ea-4a317ee67446] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.004356761s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.28s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.55s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.55s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.63s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.63s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.51s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "400.736113ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "107.218869ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.51s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.57s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 service list -o json
functional_test.go:1494: Took "573.961895ms" to run "out/minikube-linux-arm64 -p functional-108853 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.57s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.56s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "443.724971ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "120.067608ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.56s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.63s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:31916
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.63s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (9.87s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-108853 /tmp/TestFunctionalparallelMountCmdany-port2723816587/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726851789730417954" to /tmp/TestFunctionalparallelMountCmdany-port2723816587/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726851789730417954" to /tmp/TestFunctionalparallelMountCmdany-port2723816587/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726851789730417954" to /tmp/TestFunctionalparallelMountCmdany-port2723816587/001/test-1726851789730417954
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-108853 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (550.106611ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0920 17:03:10.280838    7542 retry.go:31] will retry after 714.211851ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 20 17:03 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 20 17:03 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 20 17:03 test-1726851789730417954
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 ssh cat /mount-9p/test-1726851789730417954
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-108853 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [a0d32cd5-147b-4bba-9949-59594fa67f1b] Pending
helpers_test.go:344: "busybox-mount" [a0d32cd5-147b-4bba-9949-59594fa67f1b] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [a0d32cd5-147b-4bba-9949-59594fa67f1b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
E0920 17:03:17.870116    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/addons-877987/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox-mount" [a0d32cd5-147b-4bba-9949-59594fa67f1b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.00424418s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-108853 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-108853 /tmp/TestFunctionalparallelMountCmdany-port2723816587/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.87s)
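The `will retry after 714.211851ms` line above shows how the harness treats the first failed findmnt check as transient: the 9p mount takes a moment to appear, so the probe is polled with backoff. A stand-alone sketch of that retry loop, where the probe is a stub that succeeds on its third call (a real run would re-issue `minikube ssh "findmnt -T /mount-9p | grep 9p"`):

```shell
#!/bin/sh
# Stub probe: fails twice, then succeeds, mimicking a mount that appears late.
attempts=0
probe() { attempts=$((attempts + 1)); [ "$attempts" -ge 3 ]; }

until probe; do
  sleep 0.1   # the real harness sleeps a randomized sub-second interval
done
echo "mounted after $attempts attempts"
```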

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.59s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.59s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:31916
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.37s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.39s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-108853 /tmp/TestFunctionalparallelMountCmdspecific-port182501575/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-108853 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (525.360556ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0920 17:03:20.121041    7542 retry.go:31] will retry after 707.461576ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-108853 /tmp/TestFunctionalparallelMountCmdspecific-port182501575/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-108853 ssh "sudo umount -f /mount-9p": exit status 1 (319.444789ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-108853 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-108853 /tmp/TestFunctionalparallelMountCmdspecific-port182501575/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.39s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.37s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-108853 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3759602498/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-108853 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3759602498/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-108853 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3759602498/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-108853 ssh "findmnt -T" /mount1: exit status 1 (741.151774ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0920 17:03:22.725616    7542 retry.go:31] will retry after 533.708562ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 ssh "findmnt -T" /mount2
2024/09/20 17:03:23 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-108853 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-108853 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3759602498/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-108853 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3759602498/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-108853 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3759602498/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.37s)
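The `unable to find parent, assuming dead` lines above are benign: `mount -p functional-108853 --kill=true` already terminated the three mount daemons, so the per-test teardown finds nothing left to stop. Cleanup that tolerates an already-exited process can be sketched like this (the background job here is an illustrative stand-in for a mount daemon):

```shell
#!/bin/sh
# Background job standing in for a `minikube mount` daemon.
sleep 30 &
pid=$!

# First teardown path (cf. `mount --kill=true` above) stops it...
kill "$pid" 2>/dev/null
wait "$pid" 2>/dev/null

# ...so the second teardown must treat a missing process as success, not error.
kill "$pid" 2>/dev/null || echo "pid $pid already finished, assuming dead"
echo "cleanup complete"
```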

                                                
                                    
TestFunctional/parallel/Version/short (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

                                                
                                    
TestFunctional/parallel/Version/components (1.16s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-108853 version -o=json --components: (1.161593743s)
--- PASS: TestFunctional/parallel/Version/components (1.16s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-108853 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-108853
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-108853
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-108853 image ls --format short --alsologtostderr:
I0920 17:03:30.754021   51995 out.go:345] Setting OutFile to fd 1 ...
I0920 17:03:30.754755   51995 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 17:03:30.754796   51995 out.go:358] Setting ErrFile to fd 2...
I0920 17:03:30.754818   51995 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 17:03:30.755118   51995 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-2235/.minikube/bin
I0920 17:03:30.755980   51995 config.go:182] Loaded profile config "functional-108853": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 17:03:30.756174   51995 config.go:182] Loaded profile config "functional-108853": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 17:03:30.756793   51995 cli_runner.go:164] Run: docker container inspect functional-108853 --format={{.State.Status}}
I0920 17:03:30.776379   51995 ssh_runner.go:195] Run: systemctl --version
I0920 17:03:30.776430   51995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-108853
I0920 17:03:30.808640   51995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19672-2235/.minikube/machines/functional-108853/id_rsa Username:docker}
I0920 17:03:30.912951   51995 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-108853 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/kube-apiserver              | v1.31.1           | d3f53a98c0a9d | 91.6MB |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 24a140c548c07 | 94.7MB |
| registry.k8s.io/coredns/coredns             | v1.11.3           | 2f6c962e7b831 | 60.2MB |
| registry.k8s.io/pause                       | 3.10              | afb61768ce381 | 514kB  |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 279f381cb3736 | 85.9MB |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-108853 | 41009f451bb60 | 30B    |
| registry.k8s.io/etcd                        | 3.5.15-0          | 27e3830e14027 | 139MB  |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 7f8aa378bb47d | 66MB   |
| docker.io/library/nginx                     | alpine            | b887aca7aed61 | 47MB   |
| docker.io/library/nginx                     | latest            | 195245f0c7927 | 193MB  |
| docker.io/kicbase/echo-server               | functional-108853 | ce2d2cda2d858 | 4.78MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-108853 image ls --format table --alsologtostderr:
I0920 17:03:31.285712   52150 out.go:345] Setting OutFile to fd 1 ...
I0920 17:03:31.286175   52150 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 17:03:31.286207   52150 out.go:358] Setting ErrFile to fd 2...
I0920 17:03:31.286233   52150 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 17:03:31.286569   52150 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-2235/.minikube/bin
I0920 17:03:31.287251   52150 config.go:182] Loaded profile config "functional-108853": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 17:03:31.287419   52150 config.go:182] Loaded profile config "functional-108853": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 17:03:31.288089   52150 cli_runner.go:164] Run: docker container inspect functional-108853 --format={{.State.Status}}
I0920 17:03:31.315599   52150 ssh_runner.go:195] Run: systemctl --version
I0920 17:03:31.315642   52150 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-108853
I0920 17:03:31.337743   52150 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19672-2235/.minikube/machines/functional-108853/id_rsa Username:docker}
I0920 17:03:31.435660   52150 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-108853 image ls --format json --alsologtostderr:
[{"id":"279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"85900000"},{"id":"b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"47000000"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139000000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"41009f451bb60d88e8c53602cb19621a267eeeb695156c98bc13ff76c6f48790","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-108853"],"size":"30"},{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"60200000"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-108853"],"size":"4780000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"91600000"},{"id":"7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"66000000"},{"id":"24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"94700000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-108853 image ls --format json --alsologtostderr:
I0920 17:03:31.021672   52064 out.go:345] Setting OutFile to fd 1 ...
I0920 17:03:31.021890   52064 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 17:03:31.021924   52064 out.go:358] Setting ErrFile to fd 2...
I0920 17:03:31.021950   52064 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 17:03:31.022250   52064 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-2235/.minikube/bin
I0920 17:03:31.023183   52064 config.go:182] Loaded profile config "functional-108853": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 17:03:31.023408   52064 config.go:182] Loaded profile config "functional-108853": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 17:03:31.024476   52064 cli_runner.go:164] Run: docker container inspect functional-108853 --format={{.State.Status}}
I0920 17:03:31.056816   52064 ssh_runner.go:195] Run: systemctl --version
I0920 17:03:31.056874   52064 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-108853
I0920 17:03:31.087957   52064 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19672-2235/.minikube/machines/functional-108853/id_rsa Username:docker}
I0920 17:03:31.191066   52064 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-108853 image ls --format yaml --alsologtostderr:
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "514000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 41009f451bb60d88e8c53602cb19621a267eeeb695156c98bc13ff76c6f48790
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-108853
size: "30"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "60200000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "91600000"
- id: b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "47000000"
- id: 279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "85900000"
- id: 195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139000000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-108853
size: "4780000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "66000000"
- id: 24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "94700000"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-108853 image ls --format yaml --alsologtostderr:
I0920 17:03:30.721662   51996 out.go:345] Setting OutFile to fd 1 ...
I0920 17:03:30.721970   51996 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 17:03:30.721984   51996 out.go:358] Setting ErrFile to fd 2...
I0920 17:03:30.721991   51996 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 17:03:30.723097   51996 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-2235/.minikube/bin
I0920 17:03:30.724012   51996 config.go:182] Loaded profile config "functional-108853": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 17:03:30.724129   51996 config.go:182] Loaded profile config "functional-108853": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 17:03:30.724791   51996 cli_runner.go:164] Run: docker container inspect functional-108853 --format={{.State.Status}}
I0920 17:03:30.757721   51996 ssh_runner.go:195] Run: systemctl --version
I0920 17:03:30.757770   51996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-108853
I0920 17:03:30.781663   51996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19672-2235/.minikube/machines/functional-108853/id_rsa Username:docker}
I0920 17:03:30.878676   51996 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-108853 ssh pgrep buildkitd: exit status 1 (342.658937ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 image build -t localhost/my-image:functional-108853 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-108853 image build -t localhost/my-image:functional-108853 testdata/build --alsologtostderr: (2.752788053s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-108853 image build -t localhost/my-image:functional-108853 testdata/build --alsologtostderr:
I0920 17:03:31.312640   52156 out.go:345] Setting OutFile to fd 1 ...
I0920 17:03:31.312865   52156 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 17:03:31.312878   52156 out.go:358] Setting ErrFile to fd 2...
I0920 17:03:31.312884   52156 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 17:03:31.313160   52156 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-2235/.minikube/bin
I0920 17:03:31.314095   52156 config.go:182] Loaded profile config "functional-108853": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 17:03:31.315367   52156 config.go:182] Loaded profile config "functional-108853": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 17:03:31.315858   52156 cli_runner.go:164] Run: docker container inspect functional-108853 --format={{.State.Status}}
I0920 17:03:31.340845   52156 ssh_runner.go:195] Run: systemctl --version
I0920 17:03:31.340898   52156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-108853
I0920 17:03:31.368026   52156 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19672-2235/.minikube/machines/functional-108853/id_rsa Username:docker}
I0920 17:03:31.467629   52156 build_images.go:161] Building image from path: /tmp/build.89577118.tar
I0920 17:03:31.467697   52156 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0920 17:03:31.478082   52156 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.89577118.tar
I0920 17:03:31.484252   52156 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.89577118.tar: stat -c "%s %y" /var/lib/minikube/build/build.89577118.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.89577118.tar': No such file or directory
I0920 17:03:31.484279   52156 ssh_runner.go:362] scp /tmp/build.89577118.tar --> /var/lib/minikube/build/build.89577118.tar (3072 bytes)
I0920 17:03:31.509928   52156 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.89577118
I0920 17:03:31.519107   52156 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.89577118 -xf /var/lib/minikube/build/build.89577118.tar
I0920 17:03:31.528551   52156 docker.go:360] Building image: /var/lib/minikube/build/build.89577118
I0920 17:03:31.528625   52156 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-108853 /var/lib/minikube/build/build.89577118
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.3s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B 0.0s done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.5s

#6 [2/3] RUN true
#6 DONE 0.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:f39e308151184066843c57324cc3e656d50e79ba577f771c5b023051e487ab28 done
#8 naming to localhost/my-image:functional-108853 done
#8 DONE 0.1s
I0920 17:03:33.976142   52156 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-108853 /var/lib/minikube/build/build.89577118: (2.447490468s)
I0920 17:03:33.976218   52156 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.89577118
I0920 17:03:33.987600   52156 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.89577118.tar
I0920 17:03:33.996189   52156 build_images.go:217] Built localhost/my-image:functional-108853 from /tmp/build.89577118.tar
I0920 17:03:33.996223   52156 build_images.go:133] succeeded building to: functional-108853
I0920 17:03:33.996229   52156 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.31s)

TestFunctional/parallel/ImageCommands/Setup (0.77s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-108853
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.77s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 image load --daemon kicbase/echo-server:functional-108853 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.23s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.99s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 image load --daemon kicbase/echo-server:functional-108853 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.99s)

TestFunctional/parallel/DockerEnv/bash (1.32s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-108853 docker-env) && out/minikube-linux-arm64 status -p functional-108853"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-108853 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.32s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-108853
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 image load --daemon kicbase/echo-server:functional-108853 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.20s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.20s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.31s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.31s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 image save kicbase/echo-server:functional-108853 /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.40s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 image rm kicbase/echo-server:functional-108853 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.86s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.86s)
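
The image tests above round-trip an image through a tar file: save it to echo-server-save.tar, remove it, load it back, and list to confirm. The same save/remove/restore shape can be sketched with a plain directory and tar standing in for the image store and the `image save` / `image load` commands (docker itself is assumed unavailable here, and all paths and contents are made up for illustration):

```shell
#!/bin/sh
# Save/remove/restore round-trip, with a directory standing in for the image
# store and tar for `minikube image save` / `image load`.
set -e
work=$(mktemp -d)
mkdir -p "$work/image"
printf 'layer-data\n' > "$work/image/layer"

tar -C "$work" -cf "$work/echo-server-save.tar" image   # stand-in for: image save ... echo-server-save.tar
rm -rf "$work/image"                                    # stand-in for: image rm
tar -C "$work" -xf "$work/echo-server-save.tar"         # stand-in for: image load echo-server-save.tar

restored=$(cat "$work/image/layer")                     # confirm the content survived, as `image ls` does
echo "$restored"
rm -rf "$work"
```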

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-108853
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-108853 image save --daemon kicbase/echo-server:functional-108853 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-108853
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.40s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-108853
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-108853
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-108853
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (124.14s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-123840 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0920 17:03:58.832542    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/addons-877987/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:05:20.760671    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/addons-877987/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-123840 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (2m3.298534841s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-123840 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (124.14s)

TestMultiControlPlane/serial/DeployApp (8.31s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-123840 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-123840 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-123840 -- rollout status deployment/busybox: (4.993150946s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-123840 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-123840 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-123840 -- exec busybox-7dff88458-k8c44 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-123840 -- exec busybox-7dff88458-k8xcp -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-123840 -- exec busybox-7dff88458-r5px8 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-123840 -- exec busybox-7dff88458-k8c44 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-123840 -- exec busybox-7dff88458-k8xcp -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-123840 -- exec busybox-7dff88458-r5px8 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-123840 -- exec busybox-7dff88458-k8c44 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-123840 -- exec busybox-7dff88458-k8xcp -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-123840 -- exec busybox-7dff88458-r5px8 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.31s)
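
The DeployApp check above walks the list of busybox pod names and runs the same nslookup in each one. The loop shape can be sketched as below; `getent hosts localhost` stands in for `kubectl exec ... nslookup`, since no cluster (and no in-cluster name like kubernetes.default.svc.cluster.local) is assumed to be reachable here:

```shell
#!/bin/sh
# Iterate the pod names from the log above and count successful lookups.
pods="busybox-7dff88458-k8c44 busybox-7dff88458-k8xcp busybox-7dff88458-r5px8"
ok=0
for pod in $pods; do
  # Stand-in for: kubectl --context ha-123840 exec "$pod" -- nslookup kubernetes.io
  if getent hosts localhost >/dev/null 2>&1; then
    ok=$((ok + 1))
  fi
done
echo "resolved in $ok of 3 pods"
```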

TestMultiControlPlane/serial/PingHostFromPods (1.72s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-123840 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-123840 -- exec busybox-7dff88458-k8c44 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-123840 -- exec busybox-7dff88458-k8c44 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-123840 -- exec busybox-7dff88458-k8xcp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-123840 -- exec busybox-7dff88458-k8xcp -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-123840 -- exec busybox-7dff88458-r5px8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-123840 -- exec busybox-7dff88458-r5px8 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.72s)
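
The pipeline above (`nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`) pulls the resolved host IP out of busybox-style nslookup output: line 5 is the answer's Address line, and the third space-separated field is the IP. With a canned sample (the exact layout varies by resolver, so treat this output as an assumption):

```shell
#!/bin/sh
# Canned busybox-style nslookup output for host.minikube.internal.
out='Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.49.1 host.minikube.internal'

# Same extraction the test runs inside each pod: line 5, third field.
ip=$(printf '%s\n' "$out" | awk 'NR==5' | cut -d' ' -f3)
echo "$ip"
```

The extracted IP (192.168.49.1 here) is what the follow-up `ping -c 1` targets.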

TestMultiControlPlane/serial/AddWorkerNode (28.24s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-123840 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-123840 -v=7 --alsologtostderr: (26.950793426s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-123840 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-123840 status -v=7 --alsologtostderr: (1.292275356s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (28.24s)

TestMultiControlPlane/serial/NodeLabels (0.14s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-123840 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.14s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.2s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.203245952s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.20s)

TestMultiControlPlane/serial/CopyFile (19.06s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-123840 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-linux-arm64 -p ha-123840 status --output json -v=7 --alsologtostderr: (1.004865618s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-123840 cp testdata/cp-test.txt ha-123840:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-123840 ssh -n ha-123840 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-123840 cp ha-123840:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4072724404/001/cp-test_ha-123840.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-123840 ssh -n ha-123840 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-123840 cp ha-123840:/home/docker/cp-test.txt ha-123840-m02:/home/docker/cp-test_ha-123840_ha-123840-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-123840 ssh -n ha-123840 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-123840 ssh -n ha-123840-m02 "sudo cat /home/docker/cp-test_ha-123840_ha-123840-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-123840 cp ha-123840:/home/docker/cp-test.txt ha-123840-m03:/home/docker/cp-test_ha-123840_ha-123840-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-123840 ssh -n ha-123840 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-123840 ssh -n ha-123840-m03 "sudo cat /home/docker/cp-test_ha-123840_ha-123840-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-123840 cp ha-123840:/home/docker/cp-test.txt ha-123840-m04:/home/docker/cp-test_ha-123840_ha-123840-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-123840 ssh -n ha-123840 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-123840 ssh -n ha-123840-m04 "sudo cat /home/docker/cp-test_ha-123840_ha-123840-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-123840 cp testdata/cp-test.txt ha-123840-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-123840 ssh -n ha-123840-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-123840 cp ha-123840-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4072724404/001/cp-test_ha-123840-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-123840 ssh -n ha-123840-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-123840 cp ha-123840-m02:/home/docker/cp-test.txt ha-123840:/home/docker/cp-test_ha-123840-m02_ha-123840.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-123840 ssh -n ha-123840-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-123840 ssh -n ha-123840 "sudo cat /home/docker/cp-test_ha-123840-m02_ha-123840.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-123840 cp ha-123840-m02:/home/docker/cp-test.txt ha-123840-m03:/home/docker/cp-test_ha-123840-m02_ha-123840-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-123840 ssh -n ha-123840-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-123840 ssh -n ha-123840-m03 "sudo cat /home/docker/cp-test_ha-123840-m02_ha-123840-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-123840 cp ha-123840-m02:/home/docker/cp-test.txt ha-123840-m04:/home/docker/cp-test_ha-123840-m02_ha-123840-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-123840 ssh -n ha-123840-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-123840 ssh -n ha-123840-m04 "sudo cat /home/docker/cp-test_ha-123840-m02_ha-123840-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-123840 cp testdata/cp-test.txt ha-123840-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-123840 ssh -n ha-123840-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-123840 cp ha-123840-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4072724404/001/cp-test_ha-123840-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-123840 ssh -n ha-123840-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-123840 cp ha-123840-m03:/home/docker/cp-test.txt ha-123840:/home/docker/cp-test_ha-123840-m03_ha-123840.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-123840 ssh -n ha-123840-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-123840 ssh -n ha-123840 "sudo cat /home/docker/cp-test_ha-123840-m03_ha-123840.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-123840 cp ha-123840-m03:/home/docker/cp-test.txt ha-123840-m02:/home/docker/cp-test_ha-123840-m03_ha-123840-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-123840 ssh -n ha-123840-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-123840 ssh -n ha-123840-m02 "sudo cat /home/docker/cp-test_ha-123840-m03_ha-123840-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-123840 cp ha-123840-m03:/home/docker/cp-test.txt ha-123840-m04:/home/docker/cp-test_ha-123840-m03_ha-123840-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-123840 ssh -n ha-123840-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-123840 ssh -n ha-123840-m04 "sudo cat /home/docker/cp-test_ha-123840-m03_ha-123840-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-123840 cp testdata/cp-test.txt ha-123840-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-123840 ssh -n ha-123840-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-123840 cp ha-123840-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4072724404/001/cp-test_ha-123840-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-123840 ssh -n ha-123840-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-123840 cp ha-123840-m04:/home/docker/cp-test.txt ha-123840:/home/docker/cp-test_ha-123840-m04_ha-123840.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-123840 ssh -n ha-123840-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-123840 ssh -n ha-123840 "sudo cat /home/docker/cp-test_ha-123840-m04_ha-123840.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-123840 cp ha-123840-m04:/home/docker/cp-test.txt ha-123840-m02:/home/docker/cp-test_ha-123840-m04_ha-123840-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-123840 ssh -n ha-123840-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-123840 ssh -n ha-123840-m02 "sudo cat /home/docker/cp-test_ha-123840-m04_ha-123840-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-123840 cp ha-123840-m04:/home/docker/cp-test.txt ha-123840-m03:/home/docker/cp-test_ha-123840-m04_ha-123840-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-123840 ssh -n ha-123840-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-123840 ssh -n ha-123840-m03 "sudo cat /home/docker/cp-test_ha-123840-m04_ha-123840-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.06s)
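
Every cp step above is verified the same way: copy the file, then `ssh -n ... "sudo cat ..."` it back and compare against the source. That copy-and-verify pattern, sketched with plain files standing in for `minikube cp` and `minikube ssh`:

```shell
#!/bin/sh
# Copy a file, read it back, and verify the contents survived the trip.
set -e
src=$(mktemp); dst=$(mktemp)
printf 'cp-test payload\n' > "$src"       # stand-in for testdata/cp-test.txt
cp "$src" "$dst"                          # stand-in for: minikube -p ha-123840 cp ...
if cmp -s "$src" "$dst"; then             # stand-in for: minikube ssh -n ... "sudo cat ..."
  verdict=match
else
  verdict=differ
fi
echo "$verdict"
rm -f "$src" "$dst"
```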

TestMultiControlPlane/serial/StopSecondaryNode (11.73s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-123840 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-123840 node stop m02 -v=7 --alsologtostderr: (10.960732367s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-123840 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-123840 status -v=7 --alsologtostderr: exit status 7 (771.033492ms)

-- stdout --
	ha-123840
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-123840-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-123840-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-123840-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0920 17:06:50.619484   74428 out.go:345] Setting OutFile to fd 1 ...
	I0920 17:06:50.619701   74428 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:06:50.619728   74428 out.go:358] Setting ErrFile to fd 2...
	I0920 17:06:50.619747   74428 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:06:50.620017   74428 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-2235/.minikube/bin
	I0920 17:06:50.620237   74428 out.go:352] Setting JSON to false
	I0920 17:06:50.620318   74428 mustload.go:65] Loading cluster: ha-123840
	I0920 17:06:50.620407   74428 notify.go:220] Checking for updates...
	I0920 17:06:50.620811   74428 config.go:182] Loaded profile config "ha-123840": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 17:06:50.620848   74428 status.go:174] checking status of ha-123840 ...
	I0920 17:06:50.621745   74428 cli_runner.go:164] Run: docker container inspect ha-123840 --format={{.State.Status}}
	I0920 17:06:50.643404   74428 status.go:364] ha-123840 host status = "Running" (err=<nil>)
	I0920 17:06:50.643423   74428 host.go:66] Checking if "ha-123840" exists ...
	I0920 17:06:50.643729   74428 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-123840
	I0920 17:06:50.684815   74428 host.go:66] Checking if "ha-123840" exists ...
	I0920 17:06:50.685124   74428 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 17:06:50.685182   74428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-123840
	I0920 17:06:50.705894   74428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19672-2235/.minikube/machines/ha-123840/id_rsa Username:docker}
	I0920 17:06:50.799934   74428 ssh_runner.go:195] Run: systemctl --version
	I0920 17:06:50.804626   74428 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 17:06:50.816943   74428 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 17:06:50.891506   74428 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:true NGoroutines:71 SystemTime:2024-09-20 17:06:50.877648434 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.6]] Warnings:<nil>}}
	I0920 17:06:50.892093   74428 kubeconfig.go:125] found "ha-123840" server: "https://192.168.49.254:8443"
	I0920 17:06:50.892121   74428 api_server.go:166] Checking apiserver status ...
	I0920 17:06:50.892166   74428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 17:06:50.907240   74428 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2277/cgroup
	I0920 17:06:50.919134   74428 api_server.go:182] apiserver freezer: "5:freezer:/docker/46fbe72ef66fd74e3cb7f17a887c9aa7561efac84d998222453cc17c61f879df/kubepods/burstable/pod2e0eebce1b5effecf782e6762429d642/a157d8ad9bff951855e71514cc912a8834045674d9614adafcfdcc532ec4d78f"
	I0920 17:06:50.919214   74428 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/46fbe72ef66fd74e3cb7f17a887c9aa7561efac84d998222453cc17c61f879df/kubepods/burstable/pod2e0eebce1b5effecf782e6762429d642/a157d8ad9bff951855e71514cc912a8834045674d9614adafcfdcc532ec4d78f/freezer.state
	I0920 17:06:50.928822   74428 api_server.go:204] freezer state: "THAWED"
	I0920 17:06:50.928854   74428 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0920 17:06:50.936681   74428 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0920 17:06:50.936711   74428 status.go:456] ha-123840 apiserver status = Running (err=<nil>)
	I0920 17:06:50.936722   74428 status.go:176] ha-123840 status: &{Name:ha-123840 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 17:06:50.936739   74428 status.go:174] checking status of ha-123840-m02 ...
	I0920 17:06:50.937052   74428 cli_runner.go:164] Run: docker container inspect ha-123840-m02 --format={{.State.Status}}
	I0920 17:06:50.957926   74428 status.go:364] ha-123840-m02 host status = "Stopped" (err=<nil>)
	I0920 17:06:50.957950   74428 status.go:377] host is not running, skipping remaining checks
	I0920 17:06:50.957958   74428 status.go:176] ha-123840-m02 status: &{Name:ha-123840-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 17:06:50.957978   74428 status.go:174] checking status of ha-123840-m03 ...
	I0920 17:06:50.958380   74428 cli_runner.go:164] Run: docker container inspect ha-123840-m03 --format={{.State.Status}}
	I0920 17:06:50.976784   74428 status.go:364] ha-123840-m03 host status = "Running" (err=<nil>)
	I0920 17:06:50.976806   74428 host.go:66] Checking if "ha-123840-m03" exists ...
	I0920 17:06:50.977113   74428 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-123840-m03
	I0920 17:06:50.994763   74428 host.go:66] Checking if "ha-123840-m03" exists ...
	I0920 17:06:50.995152   74428 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 17:06:50.995216   74428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-123840-m03
	I0920 17:06:51.012910   74428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/19672-2235/.minikube/machines/ha-123840-m03/id_rsa Username:docker}
	I0920 17:06:51.111959   74428 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 17:06:51.138681   74428 kubeconfig.go:125] found "ha-123840" server: "https://192.168.49.254:8443"
	I0920 17:06:51.138717   74428 api_server.go:166] Checking apiserver status ...
	I0920 17:06:51.138764   74428 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 17:06:51.152425   74428 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2107/cgroup
	I0920 17:06:51.163222   74428 api_server.go:182] apiserver freezer: "5:freezer:/docker/a0681abeaa9bdc6a2c7b606d09a578a7f858f8d9e3d0291449341b38f1fb7d49/kubepods/burstable/pod05a9995ea3baf3748c1d16b423c1e0cf/56db1540dec030535c515be94bcfe96f3828aae6362ee737885179078f2226e7"
	I0920 17:06:51.163313   74428 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/a0681abeaa9bdc6a2c7b606d09a578a7f858f8d9e3d0291449341b38f1fb7d49/kubepods/burstable/pod05a9995ea3baf3748c1d16b423c1e0cf/56db1540dec030535c515be94bcfe96f3828aae6362ee737885179078f2226e7/freezer.state
	I0920 17:06:51.172980   74428 api_server.go:204] freezer state: "THAWED"
	I0920 17:06:51.173011   74428 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0920 17:06:51.180874   74428 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0920 17:06:51.180902   74428 status.go:456] ha-123840-m03 apiserver status = Running (err=<nil>)
	I0920 17:06:51.180912   74428 status.go:176] ha-123840-m03 status: &{Name:ha-123840-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 17:06:51.180929   74428 status.go:174] checking status of ha-123840-m04 ...
	I0920 17:06:51.181235   74428 cli_runner.go:164] Run: docker container inspect ha-123840-m04 --format={{.State.Status}}
	I0920 17:06:51.198472   74428 status.go:364] ha-123840-m04 host status = "Running" (err=<nil>)
	I0920 17:06:51.198498   74428 host.go:66] Checking if "ha-123840-m04" exists ...
	I0920 17:06:51.198917   74428 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-123840-m04
	I0920 17:06:51.215309   74428 host.go:66] Checking if "ha-123840-m04" exists ...
	I0920 17:06:51.215622   74428 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 17:06:51.215669   74428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-123840-m04
	I0920 17:06:51.233575   74428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/19672-2235/.minikube/machines/ha-123840-m04/id_rsa Username:docker}
	I0920 17:06:51.327926   74428 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 17:06:51.340304   74428 status.go:176] ha-123840-m04 status: &{Name:ha-123840-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.73s)
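
Note that `status` deliberately exits non-zero (exit status 7 above) once any node is Stopped, while still printing per-node detail on stdout. A wrapper should therefore branch on the exit code rather than grepping the output; a minimal sketch, with the hypothetical `fake_status` standing in for `out/minikube-linux-arm64 -p ha-123840 status`:

```shell
#!/bin/sh
# fake_status mimics `minikube status` with one stopped node: per-node detail
# on stdout, non-zero exit code (7, as observed in the run above).
fake_status() {
  printf 'ha-123840-m02\nhost: Stopped\n'
  return 7
}

# Branch on the exit code; $? inside the else arm is the status of the
# condition command, i.e. 7 here.
if fake_status >/dev/null; then
  state=healthy
else
  state="degraded (exit $?)"
fi
echo "$state"
```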

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.75s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.75s)

TestMultiControlPlane/serial/RestartSecondaryNode (123.23s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-123840 node start m02 -v=7 --alsologtostderr
E0920 17:07:36.883649    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/addons-877987/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:07:41.225036    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/functional-108853/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:07:41.231579    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/functional-108853/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:07:41.243064    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/functional-108853/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:07:41.264454    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/functional-108853/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:07:41.306476    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/functional-108853/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:07:41.387927    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/functional-108853/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:07:41.549746    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/functional-108853/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:07:41.871561    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/functional-108853/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:07:42.513080    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/functional-108853/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:07:43.795033    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/functional-108853/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:07:46.356616    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/functional-108853/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:07:51.478151    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/functional-108853/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:08:01.719522    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/functional-108853/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:08:04.602362    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/addons-877987/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:08:22.200857    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/functional-108853/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-123840 node start m02 -v=7 --alsologtostderr: (2m2.13870256s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-123840 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (123.23s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.99s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.99s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (174.11s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-123840 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-123840 -v=7 --alsologtostderr
E0920 17:09:03.162476    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/functional-108853/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-123840 -v=7 --alsologtostderr: (34.241388294s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-123840 --wait=true -v=7 --alsologtostderr
E0920 17:10:25.084604    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/functional-108853/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-123840 --wait=true -v=7 --alsologtostderr: (2m19.729662917s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-123840
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (174.11s)

TestMultiControlPlane/serial/DeleteSecondaryNode (11.92s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-123840 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-123840 node delete m03 -v=7 --alsologtostderr: (10.922791039s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-123840 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.92s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.78s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.78s)

TestMultiControlPlane/serial/StopCluster (32.82s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-123840 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-123840 stop -v=7 --alsologtostderr: (32.709800553s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-123840 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-123840 status -v=7 --alsologtostderr: exit status 7 (109.842011ms)

-- stdout --
	ha-123840
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-123840-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-123840-m04
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I0920 17:12:35.876402  102023 out.go:345] Setting OutFile to fd 1 ...
	I0920 17:12:35.876534  102023 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:12:35.876545  102023 out.go:358] Setting ErrFile to fd 2...
	I0920 17:12:35.876550  102023 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:12:35.876804  102023 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-2235/.minikube/bin
	I0920 17:12:35.876975  102023 out.go:352] Setting JSON to false
	I0920 17:12:35.877026  102023 mustload.go:65] Loading cluster: ha-123840
	I0920 17:12:35.877121  102023 notify.go:220] Checking for updates...
	I0920 17:12:35.877493  102023 config.go:182] Loaded profile config "ha-123840": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 17:12:35.877507  102023 status.go:174] checking status of ha-123840 ...
	I0920 17:12:35.878352  102023 cli_runner.go:164] Run: docker container inspect ha-123840 --format={{.State.Status}}
	I0920 17:12:35.897066  102023 status.go:364] ha-123840 host status = "Stopped" (err=<nil>)
	I0920 17:12:35.897091  102023 status.go:377] host is not running, skipping remaining checks
	I0920 17:12:35.897098  102023 status.go:176] ha-123840 status: &{Name:ha-123840 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 17:12:35.897129  102023 status.go:174] checking status of ha-123840-m02 ...
	I0920 17:12:35.897475  102023 cli_runner.go:164] Run: docker container inspect ha-123840-m02 --format={{.State.Status}}
	I0920 17:12:35.919839  102023 status.go:364] ha-123840-m02 host status = "Stopped" (err=<nil>)
	I0920 17:12:35.919862  102023 status.go:377] host is not running, skipping remaining checks
	I0920 17:12:35.919870  102023 status.go:176] ha-123840-m02 status: &{Name:ha-123840-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 17:12:35.919888  102023 status.go:174] checking status of ha-123840-m04 ...
	I0920 17:12:35.920194  102023 cli_runner.go:164] Run: docker container inspect ha-123840-m04 --format={{.State.Status}}
	I0920 17:12:35.936911  102023 status.go:364] ha-123840-m04 host status = "Stopped" (err=<nil>)
	I0920 17:12:35.936935  102023 status.go:377] host is not running, skipping remaining checks
	I0920 17:12:35.936942  102023 status.go:176] ha-123840-m04 status: &{Name:ha-123840-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (32.82s)

TestMultiControlPlane/serial/RestartCluster (68.87s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-123840 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0920 17:12:36.884005    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/addons-877987/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:12:41.224589    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/functional-108853/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:13:08.926535    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/functional-108853/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-123840 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (1m7.784429497s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-123840 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (68.87s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (1.1s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.099449182s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (1.10s)

TestMultiControlPlane/serial/AddSecondaryNode (50.53s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-123840 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-123840 --control-plane -v=7 --alsologtostderr: (49.478877818s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-123840 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-123840 status -v=7 --alsologtostderr: (1.051228976s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (50.53s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.11s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.112241041s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.11s)

TestImageBuild/serial/Setup (31.39s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -p image-349633 --driver=docker  --container-runtime=docker
image_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -p image-349633 --driver=docker  --container-runtime=docker: (31.393689958s)
--- PASS: TestImageBuild/serial/Setup (31.39s)

TestImageBuild/serial/NormalBuild (1.95s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-349633
image_test.go:78: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-349633: (1.952737554s)
--- PASS: TestImageBuild/serial/NormalBuild (1.95s)

TestImageBuild/serial/BuildWithBuildArg (0.95s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-349633
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.95s)

TestImageBuild/serial/BuildWithDockerIgnore (0.8s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-349633
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.80s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.83s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-349633
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.83s)

TestJSONOutput/start/Command (75.73s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-120749 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-120749 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (1m15.724202632s)
--- PASS: TestJSONOutput/start/Command (75.73s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.56s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-120749 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.56s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.55s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-120749 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.55s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (10.92s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-120749 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-120749 --output=json --user=testUser: (10.917195042s)
--- PASS: TestJSONOutput/stop/Command (10.92s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-368512 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-368512 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (76.063562ms)

-- stdout --
	{"specversion":"1.0","id":"d3c79c8e-a710-4db7-b205-78f05d685a69","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-368512] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"95e18878-0440-4bc9-b1a8-91c4aa6d0721","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19672"}}
	{"specversion":"1.0","id":"e911314c-a86c-4fbf-b43a-07db6214977b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"59ada43b-bd26-403c-8083-b847c141f673","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19672-2235/kubeconfig"}}
	{"specversion":"1.0","id":"2f0e394d-800c-473a-8c01-a29e8ce71942","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-2235/.minikube"}}
	{"specversion":"1.0","id":"ec81c44d-1d01-4b31-8f4f-e9f4eb321695","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"3e8a7192-c3f5-4840-8b06-45aea6885367","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"30534e8d-b819-43b9-b19d-39192991aab5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-368512" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-368512
--- PASS: TestErrorJSONOutput (0.21s)

                                                
                                    
TestKicCustomNetwork/create_custom_network (38.61s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-099692 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-099692 --network=: (36.505670705s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-099692" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-099692
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-099692: (2.076700521s)
--- PASS: TestKicCustomNetwork/create_custom_network (38.61s)

TestKicCustomNetwork/use_default_bridge_network (34.73s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-307065 --network=bridge
E0920 17:17:36.883516    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/addons-877987/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:17:41.226468    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/functional-108853/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-307065 --network=bridge: (32.73103986s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-307065" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-307065
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-307065: (1.976216717s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (34.73s)

TestKicExistingNetwork (35.23s)

=== RUN   TestKicExistingNetwork
I0920 17:18:08.217600    7542 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0920 17:18:08.231773    7542 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0920 17:18:08.231860    7542 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0920 17:18:08.231883    7542 cli_runner.go:164] Run: docker network inspect existing-network
W0920 17:18:08.252911    7542 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0920 17:18:08.252940    7542 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I0920 17:18:08.252956    7542 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I0920 17:18:08.253054    7542 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0920 17:18:08.270517    7542 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-21730d2166d5 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:c3:50:91:d8} reservation:<nil>}
I0920 17:18:08.270840    7542 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001adb330}
I0920 17:18:08.270867    7542 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0920 17:18:08.270922    7542 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0920 17:18:08.340026    7542 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-054621 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-054621 --network=existing-network: (33.125812876s)
helpers_test.go:175: Cleaning up "existing-network-054621" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-054621
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-054621: (1.945198052s)
I0920 17:18:43.427763    7542 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (35.23s)
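The subnet selection visible in the log above (network.go skips 192.168.49.0/24 because another profile's bridge holds it, then settles on 192.168.58.0/24) can be sketched in Python. The /24 candidates stepping the third octet by 9 mirror the sequence the log shows (49, 58, and later 67 for the multinode cluster); the helper name is illustrative, not minikube's API:

```python
import ipaddress

def first_free_subnet(taken):
    """Walk candidate private /24s (third octet 49, 58, 67, ...)
    and return the first one overlapping nothing in `taken`."""
    taken = [ipaddress.ip_network(t) for t in taken]
    for octet in range(49, 255, 9):
        candidate = ipaddress.ip_network(f"192.168.{octet}.0/24")
        if not any(candidate.overlaps(t) for t in taken):
            return str(candidate)
    return None

# 192.168.49.0/24 is held by the existing minikube bridge,
# so the next candidate in the sequence is chosen.
print(first_free_subnet(["192.168.49.0/24"]))  # 192.168.58.0/24
```

The chosen subnet is then handed to `docker network create --driver=bridge --subnet=... --gateway=...`, as the `network_create.go` lines above show.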

                                                
                                    
TestKicCustomSubnet (34.88s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-144933 --subnet=192.168.60.0/24
E0920 17:18:59.963711    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/addons-877987/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-144933 --subnet=192.168.60.0/24: (32.861076539s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-144933 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-144933" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-144933
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-144933: (2.001004291s)
--- PASS: TestKicCustomSubnet (34.88s)

TestKicStaticIP (33.07s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-566391 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-566391 --static-ip=192.168.200.200: (30.796038245s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-566391 ip
helpers_test.go:175: Cleaning up "static-ip-566391" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-566391
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-566391: (2.139750102s)
--- PASS: TestKicStaticIP (33.07s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (74.51s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-907049 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-907049 --driver=docker  --container-runtime=docker: (31.931697164s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-909552 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-909552 --driver=docker  --container-runtime=docker: (36.533023472s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-907049
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-909552
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-909552" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-909552
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-909552: (2.109350724s)
helpers_test.go:175: Cleaning up "first-907049" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-907049
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-907049: (2.127933261s)
--- PASS: TestMinikubeProfile (74.51s)

TestMountStart/serial/StartWithMountFirst (7.93s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-580778 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-580778 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (6.933291356s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.93s)

TestMountStart/serial/VerifyMountFirst (0.28s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-580778 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

TestMountStart/serial/StartWithMountSecond (10.55s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-582780 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-582780 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (9.547085764s)
--- PASS: TestMountStart/serial/StartWithMountSecond (10.55s)

TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-582780 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.52s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-580778 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-580778 --alsologtostderr -v=5: (1.522417797s)
--- PASS: TestMountStart/serial/DeleteFirst (1.52s)

TestMountStart/serial/VerifyMountPostDelete (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-582780 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

TestMountStart/serial/Stop (1.19s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-582780
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-582780: (1.194093719s)
--- PASS: TestMountStart/serial/Stop (1.19s)

TestMountStart/serial/RestartStopped (8.63s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-582780
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-582780: (7.626049676s)
--- PASS: TestMountStart/serial/RestartStopped (8.63s)

TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-582780 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (83.9s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-979047 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0920 17:22:36.883290    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/addons-877987/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:22:41.225483    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/functional-108853/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-979047 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m23.26090341s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (83.90s)

TestMultiNode/serial/DeployApp2Nodes (52.38s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-979047 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-979047 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-979047 -- rollout status deployment/busybox: (4.11025845s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-979047 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0920 17:23:06.907591    7542 retry.go:31] will retry after 1.083173391s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-979047 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0920 17:23:08.155646    7542 retry.go:31] will retry after 1.961705229s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-979047 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0920 17:23:10.273893    7542 retry.go:31] will retry after 1.86382136s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-979047 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0920 17:23:12.291381    7542 retry.go:31] will retry after 2.105078538s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-979047 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0920 17:23:14.538957    7542 retry.go:31] will retry after 5.60477051s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-979047 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0920 17:23:20.285616    7542 retry.go:31] will retry after 6.546139163s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-979047 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0920 17:23:26.974743    7542 retry.go:31] will retry after 7.661941718s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-979047 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0920 17:23:34.784024    7542 retry.go:31] will retry after 18.203682262s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-979047 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-979047 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-979047 -- exec busybox-7dff88458-rttz7 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-979047 -- exec busybox-7dff88458-tj7fq -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-979047 -- exec busybox-7dff88458-rttz7 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-979047 -- exec busybox-7dff88458-tj7fq -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-979047 -- exec busybox-7dff88458-rttz7 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-979047 -- exec busybox-7dff88458-tj7fq -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (52.38s)
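The retry sequence in the log above (retry.go re-runs the jsonpath query with growing delays until both busybox replicas report a Pod IP) can be approximated as below. The probe and its return shape are hypothetical stand-ins for the `kubectl get pods -o jsonpath` call, not minikube's actual retry.go:

```python
import itertools

def poll_until(probe, want, max_attempts=10):
    """Call `probe` until it returns at least `want` items,
    recording the growing delays a caller would sleep between
    attempts (roughly how retry.go backs off in the log)."""
    delays, delay = [], 1.0
    for _ in range(max_attempts):
        got = probe()
        if len(got) >= want:
            return got, delays
        delays.append(delay)
        delay *= 1.6  # grow the wait, as the logged retries do
    raise TimeoutError(f"never saw {want} Pod IPs")

# Simulated kubectl query: the second Pod IP appears on attempt 4,
# once the pod scheduled on the second node is running.
attempts = itertools.count(1)
probe = lambda: (["10.244.0.3"] if next(attempts) < 4
                 else ["10.244.0.3", "10.244.1.2"])
ips, waited = poll_until(probe, want=2)
```

The test treats a short Pod-IP list as "may be temporary" rather than a failure, which is why the run above still passes after nine retries.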

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (1.04s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-979047 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-979047 -- exec busybox-7dff88458-rttz7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-979047 -- exec busybox-7dff88458-rttz7 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-979047 -- exec busybox-7dff88458-tj7fq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-979047 -- exec busybox-7dff88458-tj7fq -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.04s)

TestMultiNode/serial/AddNode (18.89s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-979047 -v 3 --alsologtostderr
E0920 17:24:04.288581    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/functional-108853/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-979047 -v 3 --alsologtostderr: (18.052776039s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (18.89s)

TestMultiNode/serial/MultiNodeLabels (0.11s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-979047 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.11s)

TestMultiNode/serial/ProfileList (0.76s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.76s)

TestMultiNode/serial/CopyFile (10.01s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 cp testdata/cp-test.txt multinode-979047:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 ssh -n multinode-979047 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 cp multinode-979047:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3126785112/001/cp-test_multinode-979047.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 ssh -n multinode-979047 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 cp multinode-979047:/home/docker/cp-test.txt multinode-979047-m02:/home/docker/cp-test_multinode-979047_multinode-979047-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 ssh -n multinode-979047 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 ssh -n multinode-979047-m02 "sudo cat /home/docker/cp-test_multinode-979047_multinode-979047-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 cp multinode-979047:/home/docker/cp-test.txt multinode-979047-m03:/home/docker/cp-test_multinode-979047_multinode-979047-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 ssh -n multinode-979047 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 ssh -n multinode-979047-m03 "sudo cat /home/docker/cp-test_multinode-979047_multinode-979047-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 cp testdata/cp-test.txt multinode-979047-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 ssh -n multinode-979047-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 cp multinode-979047-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3126785112/001/cp-test_multinode-979047-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 ssh -n multinode-979047-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 cp multinode-979047-m02:/home/docker/cp-test.txt multinode-979047:/home/docker/cp-test_multinode-979047-m02_multinode-979047.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 ssh -n multinode-979047-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 ssh -n multinode-979047 "sudo cat /home/docker/cp-test_multinode-979047-m02_multinode-979047.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 cp multinode-979047-m02:/home/docker/cp-test.txt multinode-979047-m03:/home/docker/cp-test_multinode-979047-m02_multinode-979047-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 ssh -n multinode-979047-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 ssh -n multinode-979047-m03 "sudo cat /home/docker/cp-test_multinode-979047-m02_multinode-979047-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 cp testdata/cp-test.txt multinode-979047-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 ssh -n multinode-979047-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 cp multinode-979047-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3126785112/001/cp-test_multinode-979047-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 ssh -n multinode-979047-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 cp multinode-979047-m03:/home/docker/cp-test.txt multinode-979047:/home/docker/cp-test_multinode-979047-m03_multinode-979047.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 ssh -n multinode-979047-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 ssh -n multinode-979047 "sudo cat /home/docker/cp-test_multinode-979047-m03_multinode-979047.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 cp multinode-979047-m03:/home/docker/cp-test.txt multinode-979047-m02:/home/docker/cp-test_multinode-979047-m03_multinode-979047-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 ssh -n multinode-979047-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 ssh -n multinode-979047-m02 "sudo cat /home/docker/cp-test_multinode-979047-m03_multinode-979047-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.01s)
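The CopyFile block above is an all-pairs matrix: a test file is copied into each node, read back, then pushed from every node to every other node under a name that encodes source and destination. A sketch of how that matrix enumerates (the filename scheme follows the log; the function itself is illustrative, not minikube code):

```python
from itertools import permutations

def copy_matrix(nodes):
    """Yield (src, dst, remote_path) triples matching the
    cp-test_<src>_<dst>.txt naming visible in the log."""
    return [(src, dst, f"/home/docker/cp-test_{src}_{dst}.txt")
            for src, dst in permutations(nodes, 2)]

nodes = ["multinode-979047", "multinode-979047-m02", "multinode-979047-m03"]
ops = copy_matrix(nodes)
print(len(ops))  # 6 node-to-node copies for 3 nodes
```

Each triple maps to one `minikube cp` plus an `ssh -- sudo cat` verification, which is why the log shows the same `cp-test.txt` read back so many times.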

                                                
                                    
TestMultiNode/serial/StopNode (2.27s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-979047 node stop m03: (1.20975635s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-979047 status: exit status 7 (516.327091ms)

-- stdout --
	multinode-979047
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-979047-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-979047-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-979047 status --alsologtostderr: exit status 7 (543.897629ms)

-- stdout --
	multinode-979047
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-979047-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-979047-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0920 17:24:27.344549  176556 out.go:345] Setting OutFile to fd 1 ...
	I0920 17:24:27.344718  176556 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:24:27.344723  176556 out.go:358] Setting ErrFile to fd 2...
	I0920 17:24:27.344728  176556 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:24:27.344982  176556 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-2235/.minikube/bin
	I0920 17:24:27.345160  176556 out.go:352] Setting JSON to false
	I0920 17:24:27.345196  176556 mustload.go:65] Loading cluster: multinode-979047
	I0920 17:24:27.345323  176556 notify.go:220] Checking for updates...
	I0920 17:24:27.345626  176556 config.go:182] Loaded profile config "multinode-979047": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 17:24:27.345647  176556 status.go:174] checking status of multinode-979047 ...
	I0920 17:24:27.346555  176556 cli_runner.go:164] Run: docker container inspect multinode-979047 --format={{.State.Status}}
	I0920 17:24:27.363785  176556 status.go:364] multinode-979047 host status = "Running" (err=<nil>)
	I0920 17:24:27.363813  176556 host.go:66] Checking if "multinode-979047" exists ...
	I0920 17:24:27.364133  176556 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-979047
	I0920 17:24:27.398266  176556 host.go:66] Checking if "multinode-979047" exists ...
	I0920 17:24:27.398620  176556 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 17:24:27.398677  176556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-979047
	I0920 17:24:27.428983  176556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/19672-2235/.minikube/machines/multinode-979047/id_rsa Username:docker}
	I0920 17:24:27.527505  176556 ssh_runner.go:195] Run: systemctl --version
	I0920 17:24:27.531936  176556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 17:24:27.543744  176556 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 17:24:27.599527  176556 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-09-20 17:24:27.589996938 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.6]] Warnings:<nil>}}
	I0920 17:24:27.600106  176556 kubeconfig.go:125] found "multinode-979047" server: "https://192.168.67.2:8443"
	I0920 17:24:27.600150  176556 api_server.go:166] Checking apiserver status ...
	I0920 17:24:27.600198  176556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 17:24:27.611726  176556 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2329/cgroup
	I0920 17:24:27.621003  176556 api_server.go:182] apiserver freezer: "5:freezer:/docker/3689552cde653f779fa41c9882b62bab343fccd3d74bd6af0ab5e47fbc715864/kubepods/burstable/pod3091212613f76e3790201817393e2555/5b6891bc3590ad2ef991f63ae04e63e44082f7536db7eb67155fa0e55d2d1089"
	I0920 17:24:27.621092  176556 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/3689552cde653f779fa41c9882b62bab343fccd3d74bd6af0ab5e47fbc715864/kubepods/burstable/pod3091212613f76e3790201817393e2555/5b6891bc3590ad2ef991f63ae04e63e44082f7536db7eb67155fa0e55d2d1089/freezer.state
	I0920 17:24:27.630012  176556 api_server.go:204] freezer state: "THAWED"
	I0920 17:24:27.630043  176556 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0920 17:24:27.637709  176556 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0920 17:24:27.637741  176556 status.go:456] multinode-979047 apiserver status = Running (err=<nil>)
	I0920 17:24:27.637753  176556 status.go:176] multinode-979047 status: &{Name:multinode-979047 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 17:24:27.637769  176556 status.go:174] checking status of multinode-979047-m02 ...
	I0920 17:24:27.638083  176556 cli_runner.go:164] Run: docker container inspect multinode-979047-m02 --format={{.State.Status}}
	I0920 17:24:27.654525  176556 status.go:364] multinode-979047-m02 host status = "Running" (err=<nil>)
	I0920 17:24:27.654549  176556 host.go:66] Checking if "multinode-979047-m02" exists ...
	I0920 17:24:27.654850  176556 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-979047-m02
	I0920 17:24:27.681356  176556 host.go:66] Checking if "multinode-979047-m02" exists ...
	I0920 17:24:27.681973  176556 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 17:24:27.682049  176556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-979047-m02
	I0920 17:24:27.701826  176556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/19672-2235/.minikube/machines/multinode-979047-m02/id_rsa Username:docker}
	I0920 17:24:27.799206  176556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 17:24:27.810586  176556 status.go:176] multinode-979047-m02 status: &{Name:multinode-979047-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0920 17:24:27.810628  176556 status.go:174] checking status of multinode-979047-m03 ...
	I0920 17:24:27.810979  176556 cli_runner.go:164] Run: docker container inspect multinode-979047-m03 --format={{.State.Status}}
	I0920 17:24:27.828425  176556 status.go:364] multinode-979047-m03 host status = "Stopped" (err=<nil>)
	I0920 17:24:27.828459  176556 status.go:377] host is not running, skipping remaining checks
	I0920 17:24:27.828468  176556 status.go:176] multinode-979047-m03 status: &{Name:multinode-979047-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.27s)

TestMultiNode/serial/StartAfterStop (11.41s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-979047 node start m03 -v=7 --alsologtostderr: (10.638142051s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (11.41s)

TestMultiNode/serial/RestartKeepsNodes (98.88s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-979047
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-979047
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-979047: (22.554518913s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-979047 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-979047 --wait=true -v=8 --alsologtostderr: (1m16.194388522s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-979047
--- PASS: TestMultiNode/serial/RestartKeepsNodes (98.88s)

TestMultiNode/serial/DeleteNode (5.69s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-979047 node delete m03: (4.99737832s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.69s)

TestMultiNode/serial/StopMultiNode (21.85s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-979047 stop: (21.654037597s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-979047 status: exit status 7 (100.153194ms)

-- stdout --
	multinode-979047
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-979047-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-979047 status --alsologtostderr: exit status 7 (94.12721ms)

-- stdout --
	multinode-979047
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-979047-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0920 17:26:45.612863  190195 out.go:345] Setting OutFile to fd 1 ...
	I0920 17:26:45.613053  190195 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:26:45.613075  190195 out.go:358] Setting ErrFile to fd 2...
	I0920 17:26:45.613103  190195 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:26:45.613372  190195 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-2235/.minikube/bin
	I0920 17:26:45.613589  190195 out.go:352] Setting JSON to false
	I0920 17:26:45.613660  190195 mustload.go:65] Loading cluster: multinode-979047
	I0920 17:26:45.613702  190195 notify.go:220] Checking for updates...
	I0920 17:26:45.614132  190195 config.go:182] Loaded profile config "multinode-979047": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 17:26:45.614182  190195 status.go:174] checking status of multinode-979047 ...
	I0920 17:26:45.614803  190195 cli_runner.go:164] Run: docker container inspect multinode-979047 --format={{.State.Status}}
	I0920 17:26:45.633641  190195 status.go:364] multinode-979047 host status = "Stopped" (err=<nil>)
	I0920 17:26:45.633661  190195 status.go:377] host is not running, skipping remaining checks
	I0920 17:26:45.633667  190195 status.go:176] multinode-979047 status: &{Name:multinode-979047 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 17:26:45.633693  190195 status.go:174] checking status of multinode-979047-m02 ...
	I0920 17:26:45.633990  190195 cli_runner.go:164] Run: docker container inspect multinode-979047-m02 --format={{.State.Status}}
	I0920 17:26:45.658259  190195 status.go:364] multinode-979047-m02 host status = "Stopped" (err=<nil>)
	I0920 17:26:45.658280  190195 status.go:377] host is not running, skipping remaining checks
	I0920 17:26:45.658286  190195 status.go:176] multinode-979047-m02 status: &{Name:multinode-979047-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.85s)

TestMultiNode/serial/RestartMultiNode (58.73s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-979047 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0920 17:27:36.883372    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/addons-877987/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:27:41.224713    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/functional-108853/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-979047 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (58.005999855s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-979047 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (58.73s)

TestMultiNode/serial/ValidateNameConflict (38.51s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-979047
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-979047-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-979047-m02 --driver=docker  --container-runtime=docker: exit status 14 (90.811585ms)

-- stdout --
	* [multinode-979047-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19672
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19672-2235/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-2235/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-979047-m02' is duplicated with machine name 'multinode-979047-m02' in profile 'multinode-979047'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-979047-m03 --driver=docker  --container-runtime=docker
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-979047-m03 --driver=docker  --container-runtime=docker: (36.069726648s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-979047
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-979047: exit status 80 (323.971395ms)

-- stdout --
	* Adding node m03 to cluster multinode-979047 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-979047-m03 already exists in multinode-979047-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-979047-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-979047-m03: (1.985378928s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (38.51s)

TestPreload (142.72s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-365570 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-365570 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (1m42.037600143s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-365570 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-365570 image pull gcr.io/k8s-minikube/busybox: (1.970748653s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-365570
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-365570: (10.902150636s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-365570 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-365570 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (25.232072919s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-365570 image list
helpers_test.go:175: Cleaning up "test-preload-365570" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-365570
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-365570: (2.269341995s)
--- PASS: TestPreload (142.72s)

TestScheduledStopUnix (105.14s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-780548 --memory=2048 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-780548 --memory=2048 --driver=docker  --container-runtime=docker: (31.96445467s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-780548 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-780548 -n scheduled-stop-780548
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-780548 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0920 17:31:22.095387    7542 retry.go:31] will retry after 51.355µs: open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/scheduled-stop-780548/pid: no such file or directory
I0920 17:31:22.098466    7542 retry.go:31] will retry after 137.012µs: open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/scheduled-stop-780548/pid: no such file or directory
I0920 17:31:22.099625    7542 retry.go:31] will retry after 193.438µs: open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/scheduled-stop-780548/pid: no such file or directory
I0920 17:31:22.100786    7542 retry.go:31] will retry after 227.027µs: open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/scheduled-stop-780548/pid: no such file or directory
I0920 17:31:22.101934    7542 retry.go:31] will retry after 576.627µs: open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/scheduled-stop-780548/pid: no such file or directory
I0920 17:31:22.103083    7542 retry.go:31] will retry after 472.921µs: open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/scheduled-stop-780548/pid: no such file or directory
I0920 17:31:22.104226    7542 retry.go:31] will retry after 1.523765ms: open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/scheduled-stop-780548/pid: no such file or directory
I0920 17:31:22.106417    7542 retry.go:31] will retry after 1.311285ms: open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/scheduled-stop-780548/pid: no such file or directory
I0920 17:31:22.108628    7542 retry.go:31] will retry after 3.778092ms: open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/scheduled-stop-780548/pid: no such file or directory
I0920 17:31:22.112864    7542 retry.go:31] will retry after 4.446972ms: open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/scheduled-stop-780548/pid: no such file or directory
I0920 17:31:22.118100    7542 retry.go:31] will retry after 7.82409ms: open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/scheduled-stop-780548/pid: no such file or directory
I0920 17:31:22.126030    7542 retry.go:31] will retry after 9.18437ms: open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/scheduled-stop-780548/pid: no such file or directory
I0920 17:31:22.136303    7542 retry.go:31] will retry after 7.02001ms: open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/scheduled-stop-780548/pid: no such file or directory
I0920 17:31:22.143467    7542 retry.go:31] will retry after 22.467582ms: open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/scheduled-stop-780548/pid: no such file or directory
I0920 17:31:22.166757    7542 retry.go:31] will retry after 36.071165ms: open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/scheduled-stop-780548/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-780548 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-780548 -n scheduled-stop-780548
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-780548
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-780548 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-780548
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-780548: exit status 7 (78.602834ms)

-- stdout --
	scheduled-stop-780548
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-780548 -n scheduled-stop-780548
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-780548 -n scheduled-stop-780548: exit status 7 (71.082448ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-780548" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-780548
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-780548: (1.650579628s)
--- PASS: TestScheduledStopUnix (105.14s)

TestSkaffold (120.92s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe2222582070 version
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p skaffold-349114 --memory=2600 --driver=docker  --container-runtime=docker
E0920 17:32:36.883409    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/addons-877987/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:32:41.226494    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/functional-108853/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p skaffold-349114 --memory=2600 --driver=docker  --container-runtime=docker: (33.391798414s)
skaffold_test.go:86: copying out/minikube-linux-arm64 to /home/jenkins/workspace/Docker_Linux_docker_arm64/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe2222582070 run --minikube-profile skaffold-349114 --kube-context skaffold-349114 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe2222582070 run --minikube-profile skaffold-349114 --kube-context skaffold-349114 --status-check=true --port-forward=false --interactive=false: (1m11.110353099s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-b579f8f9c-7rtnb" [3ff687f8-a8be-4044-a731-10337ff67be3] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.004413104s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-8597bb5bfc-7q6mt" [4cc08e22-b5e8-4a49-9a4d-34d44387515d] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 6.004264475s
helpers_test.go:175: Cleaning up "skaffold-349114" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p skaffold-349114
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p skaffold-349114: (2.950543374s)
--- PASS: TestSkaffold (120.92s)

TestInsufficientStorage (11.32s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-850578 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-850578 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (8.943784687s)

-- stdout --
	{"specversion":"1.0","id":"e7852e99-852f-4d6a-b203-d89ddbb9299a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-850578] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"bb209dfb-afbf-46a9-9516-263ef9e85ff1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19672"}}
	{"specversion":"1.0","id":"210004a9-f68b-4361-82f1-c0e79148c26b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ae5f6620-1510-47a5-a55e-231fbdd3f472","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19672-2235/kubeconfig"}}
	{"specversion":"1.0","id":"3b393d18-8b24-4a6d-ad5c-81d092e367d1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-2235/.minikube"}}
	{"specversion":"1.0","id":"6841e466-ccb9-4594-b060-cb2c54e4a6b6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"0c4a472b-b666-41c6-b92f-d5001ef3a617","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"6a62175b-41c5-46fe-bba7-d16bdbaa28d1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"852bb0f7-ca17-4f60-a533-0dd0e50a44fd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"2d738c7f-68e2-4d4f-a98f-d352752454b0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"af829f29-0ccc-413e-8938-c860f2ab2602","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"e8562530-2574-4b8c-b7d3-bc6e5b9e8f4e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-850578\" primary control-plane node in \"insufficient-storage-850578\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"bd38c9f5-3acf-4daa-9899-1f78f3c063c4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1726784731-19672 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"f834e0ff-efa4-4e77-9333-80d4f87cc11c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"07577ced-c7d1-4f9a-81d5-adcbc82e0357","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
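With `--output=json`, `minikube start` emits one CloudEvents envelope per stdout line, as shown above. A minimal sketch of filtering those lines for error events; the two sample envelopes are abbreviated from the output above (the long "advice" field is dropped for brevity):

```python
import json

# Two CloudEvents lines, abbreviated from the stdout block above.
lines = [
    '{"specversion":"1.0","type":"io.k8s.sigs.minikube.info","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}',
    '{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"26","name":"RSRC_DOCKER_STORAGE","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass \'--force\' to skip this check."}}',
]

# Surface only the error events, the way a wrapper script might.
for line in lines:
    event = json.loads(line)
    if event["type"].endswith(".error"):
        data = event["data"]
        print(data["name"], data["exitcode"], "-", data["message"])
```

The `type` suffix (`.step`, `.info`, `.error`) is the discriminator; the `exitcode` field carries the same value (26) the process exits with.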
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-850578 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-850578 --output=json --layout=cluster: exit status 7 (279.935871ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-850578","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-850578","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 17:34:44.896094  224584 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-850578" does not appear in /home/jenkins/minikube-integration/19672-2235/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-850578 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-850578 --output=json --layout=cluster: exit status 7 (396.022737ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-850578","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-850578","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 17:34:45.285929  224645 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-850578" does not appear in /home/jenkins/minikube-integration/19672-2235/kubeconfig
	E0920 17:34:45.302693  224645 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/insufficient-storage-850578/events.json: no such file or directory

                                                
                                                
** /stderr **
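The `--layout=cluster` status JSON above is machine-readable. A minimal sketch of pulling the overall and per-component states out of it with the Python stdlib; the JSON literal is copied verbatim from the second status call above:

```python
import json

# Cluster-layout status JSON from the second
# `status --output=json --layout=cluster` call above.
raw = ('{"Name":"insufficient-storage-850578","StatusCode":507,'
       '"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space",'
       '"BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig",'
       '"StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-850578",'
       '"StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":'
       '{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":'
       '{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}')
cluster = json.loads(raw)

# Top-level state, then each node's component states.
print(cluster["StatusName"], cluster["StatusCode"])  # InsufficientStorage 507
for node in cluster["Nodes"]:
    for comp_name, comp in node["Components"].items():
        print(node["Name"], comp_name, comp["StatusName"])
```

The status codes mirror HTTP conventions (507 InsufficientStorage, 405 Stopped, 500 Error), which is what the test asserts on via exit status 7.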
helpers_test.go:175: Cleaning up "insufficient-storage-850578" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-850578
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-850578: (1.694729734s)
--- PASS: TestInsufficientStorage (11.32s)

TestRunningBinaryUpgrade (88.18s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
E0920 17:42:41.226388    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/functional-108853/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1629952736 start -p running-upgrade-668445 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1629952736 start -p running-upgrade-668445 --memory=2200 --vm-driver=docker  --container-runtime=docker: (38.812292714s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-668445 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-668445 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (46.080849935s)
helpers_test.go:175: Cleaning up "running-upgrade-668445" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-668445
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-668445: (2.501191014s)
--- PASS: TestRunningBinaryUpgrade (88.18s)

TestKubernetesUpgrade (380.92s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-722860 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-722860 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (54.954129225s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-722860
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-722860: (11.095652558s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-722860 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-722860 status --format={{.Host}}: exit status 7 (92.708303ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-722860 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0920 17:42:36.884074    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/addons-877987/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-722860 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m42.456383879s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-722860 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-722860 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-722860 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker: exit status 106 (115.320681ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-722860] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19672
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19672-2235/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-2235/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-722860
	    minikube start -p kubernetes-upgrade-722860 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7228602 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-722860 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-722860 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-722860 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (29.405747895s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-722860" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-722860
E0920 17:47:36.883148    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/addons-877987/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-722860: (2.684709304s)
--- PASS: TestKubernetesUpgrade (380.92s)

TestMissingContainerUpgrade (123.81s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.2643535485 start -p missing-upgrade-316399 --memory=2200 --driver=docker  --container-runtime=docker
E0920 17:40:42.651587    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/skaffold-349114/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:40:44.290801    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/functional-108853/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.2643535485 start -p missing-upgrade-316399 --memory=2200 --driver=docker  --container-runtime=docker: (46.323776013s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-316399
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-316399: (10.404313618s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-316399
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-316399 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0920 17:42:04.572991    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/skaffold-349114/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-316399 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m3.901787633s)
helpers_test.go:175: Cleaning up "missing-upgrade-316399" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-316399
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-316399: (2.302466774s)
--- PASS: TestMissingContainerUpgrade (123.81s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-858688 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-858688 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (99.718895ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-858688] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19672
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19672-2235/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-2235/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/StartWithK8s (46.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-858688 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-858688 --driver=docker  --container-runtime=docker: (45.724938293s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-858688 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (46.10s)

TestNoKubernetes/serial/StartWithStopK8s (18.84s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-858688 --no-kubernetes --driver=docker  --container-runtime=docker
E0920 17:35:39.965126    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/addons-877987/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-858688 --no-kubernetes --driver=docker  --container-runtime=docker: (16.71482531s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-858688 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-858688 status -o json: exit status 2 (337.351207ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-858688","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
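The profile status printed above is also plain JSON. A small sketch of the check this step implies, using the literal copied from the `status -o json` output above: with `--no-kubernetes` the container keeps running while the Kubernetes components stop, and that is the combination verified before the profile is deleted.

```python
import json

# Profile status exactly as printed by `minikube status -o json` above.
raw = ('{"Name":"NoKubernetes-858688","Host":"Running","Kubelet":"Stopped",'
       '"APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}')
status = json.loads(raw)

# Host up, Kubernetes components down: the expected --no-kubernetes state.
k8s_running = status["Kubelet"] == "Running" or status["APIServer"] == "Running"
print(status["Host"], "k8s_running:", k8s_running)  # Running k8s_running: False
```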
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-858688
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-858688: (1.785291574s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (18.84s)

TestNoKubernetes/serial/Start (7.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-858688 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-858688 --no-kubernetes --driver=docker  --container-runtime=docker: (7.111245383s)
--- PASS: TestNoKubernetes/serial/Start (7.11s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-858688 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-858688 "sudo systemctl is-active --quiet service kubelet": exit status 1 (262.403055ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

TestNoKubernetes/serial/ProfileList (1.43s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.43s)

TestNoKubernetes/serial/Stop (1.25s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-858688
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-858688: (1.245071801s)
--- PASS: TestNoKubernetes/serial/Stop (1.25s)

TestNoKubernetes/serial/StartNoArgs (7.42s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-858688 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-858688 --driver=docker  --container-runtime=docker: (7.420062487s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.42s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-858688 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-858688 "sudo systemctl is-active --quiet service kubelet": exit status 1 (297.792541ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.30s)

TestStoppedBinaryUpgrade/Setup (1.44s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.44s)

TestStoppedBinaryUpgrade/Upgrade (134.84s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.666833363 start -p stopped-upgrade-789153 --memory=2200 --vm-driver=docker  --container-runtime=docker
E0920 17:39:20.712745    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/skaffold-349114/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:39:20.719124    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/skaffold-349114/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:39:20.730485    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/skaffold-349114/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:39:20.751940    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/skaffold-349114/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:39:20.793437    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/skaffold-349114/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:39:20.874862    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/skaffold-349114/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:39:21.036610    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/skaffold-349114/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:39:21.358404    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/skaffold-349114/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:39:22.000689    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/skaffold-349114/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:39:23.282427    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/skaffold-349114/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:39:25.843747    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/skaffold-349114/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:39:30.965236    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/skaffold-349114/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:39:41.206633    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/skaffold-349114/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.666833363 start -p stopped-upgrade-789153 --memory=2200 --vm-driver=docker  --container-runtime=docker: (1m34.447498489s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.666833363 -p stopped-upgrade-789153 stop
E0920 17:40:01.690148    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/skaffold-349114/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.666833363 -p stopped-upgrade-789153 stop: (10.795758666s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-789153 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-789153 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (29.594063876s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (134.84s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.35s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-789153
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-789153: (1.349916175s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.35s)

TestPause/serial/Start (48.34s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-468765 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
E0920 17:44:20.711642    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/skaffold-349114/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:44:48.414673    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/skaffold-349114/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-468765 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (48.33757402s)
--- PASS: TestPause/serial/Start (48.34s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (35.75s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-468765 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-468765 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (35.732805137s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (35.75s)

                                                
                                    
TestPause/serial/Pause (0.6s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-468765 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.60s)

                                                
                                    
TestPause/serial/VerifyStatus (0.32s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-468765 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-468765 --output=json --layout=cluster: exit status 2 (318.287984ms)

                                                
                                                
-- stdout --
	{"Name":"pause-468765","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-468765","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.32s)
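Note on the stdout above: `minikube status --output=json --layout=cluster` reuses HTTP-style status codes (200 OK, 405 Stopped, 418 Paused) at both the cluster and component level. A minimal sketch, not part of the test suite, that parses an abridged copy of the payload captured above and checks for the paused state:

```python
import json

# Abridged status payload from the VerifyStatus run above; minikube
# reports 418 "Paused" for the cluster and apiserver, 405 "Stopped"
# for the kubelet.
status = json.loads(
    '{"Name":"pause-468765","StatusCode":418,"StatusName":"Paused",'
    '"Nodes":[{"Name":"pause-468765","StatusCode":200,"StatusName":"OK",'
    '"Components":{'
    '"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},'
    '"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}'
)

def is_paused(cluster: dict) -> bool:
    """True when the cluster-level status reports 418 (Paused)."""
    return cluster["StatusCode"] == 418

print(is_paused(status))  # True for the run above
```

This also explains the exit status 2: the CLI signals the non-running state through its exit code while still emitting valid JSON.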

                                                
                                    
TestPause/serial/Unpause (0.54s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-468765 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.54s)

                                                
                                    
TestPause/serial/PauseAgain (0.87s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-468765 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.87s)

                                                
                                    
TestPause/serial/DeletePaused (2.12s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-468765 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-468765 --alsologtostderr -v=5: (2.119002303s)
--- PASS: TestPause/serial/DeletePaused (2.12s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.37s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-468765
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-468765: exit status 1 (16.394803ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-468765: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.37s)
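The verification above treats a non-zero `docker volume inspect` exit combined with an empty `[]` result list as proof the volume is gone. A hedged sketch of that check (hypothetical helper, exercised with the exit code and stdout captured above rather than a live Docker daemon):

```python
import json

def volume_deleted(exit_code: int, stdout: str) -> bool:
    """Mirror the check above: deletion is confirmed when
    `docker volume inspect` exits non-zero AND its JSON stdout
    is an empty result list."""
    return exit_code != 0 and json.loads(stdout) == []

# Values from the run above: exit status 1, stdout "[]".
print(volume_deleted(1, "[]"))  # True
```

Requiring both signals guards against a partially deleted volume, where inspect could still return metadata alongside an error.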

                                                
                                    
TestNetworkPlugins/group/auto/Start (51.95s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-271955 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-271955 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (51.948916049s)
--- PASS: TestNetworkPlugins/group/auto/Start (51.95s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-271955 "pgrep -a kubelet"
I0920 17:46:30.104527    7542 config.go:182] Loaded profile config "auto-271955": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.34s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (9.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-271955 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-clxzs" [d71898ac-0904-4ea7-89c9-54659e01f3b5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-clxzs" [d71898ac-0904-4ea7-89c9-54659e01f3b5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.004688476s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.33s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-271955 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.26s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-271955 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-271955 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)
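The Localhost and HairPin checks above use `nc -w 5 -z` as a zero-payload TCP connectability probe. A minimal Python equivalent, not from the suite, demonstrated against a throwaway local listener instead of the in-cluster netcat service:

```python
import socket
import threading

def tcp_reachable(host: str, port: int, timeout: float = 5.0) -> bool:
    """Roughly `nc -w 5 -z host port`: report whether a TCP connection
    completes within the timeout, sending no payload."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Stand up a local listener to probe, in place of the netcat pod.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]
threading.Thread(target=srv.accept, daemon=True).start()

reachable = tcp_reachable("127.0.0.1", port)
srv.close()
print(reachable)  # True while the listener is up
```

In the hairpin case the pod connects back to itself through its own service name, which is why that probe can fail on CNIs without hairpin NAT even when the localhost probe passes.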

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (74.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-271955 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-271955 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (1m14.037843801s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (74.04s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (74.78s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-271955 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
E0920 17:47:41.227322    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/functional-108853/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-271955 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m14.77675576s)
--- PASS: TestNetworkPlugins/group/calico/Start (74.78s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-nzm9p" [f367c810-fd41-42ff-afc1-baba73a6143b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005400287s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-271955 "pgrep -a kubelet"
I0920 17:48:25.100673    7542 config.go:182] Loaded profile config "kindnet-271955": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.42s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (13.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-271955 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-4lgph" [409b2827-b9a9-4112-845b-c8397048eb0e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-4lgph" [409b2827-b9a9-4112-845b-c8397048eb0e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 13.003847219s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (13.46s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-271955 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.33s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-271955 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.27s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-271955 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.26s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-s7mlk" [5e620ff2-7d0c-40ac-b890-8a89194ffca0] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.012337483s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-271955 "pgrep -a kubelet"
I0920 17:48:58.351172    7542 config.go:182] Loaded profile config "calico-271955": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.42s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (15.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-271955 replace --force -f testdata/netcat-deployment.yaml
I0920 17:48:58.773567    7542 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-6gc7q" [796cf287-4eaa-477b-900e-aff11c4bd10f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-6gc7q" [796cf287-4eaa-477b-900e-aff11c4bd10f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 15.003647573s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (15.44s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (65.86s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-271955 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-271955 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (1m5.863870031s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (65.86s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-271955 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-271955 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.25s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-271955 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.22s)

                                                
                                    
TestNetworkPlugins/group/false/Start (76.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p false-271955 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p false-271955 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (1m16.192331572s)
--- PASS: TestNetworkPlugins/group/false/Start (76.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-271955 "pgrep -a kubelet"
I0920 17:50:07.872932    7542 config.go:182] Loaded profile config "custom-flannel-271955": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.44s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-271955 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-rtt2h" [2e1efefc-f778-4f98-8413-a6a04bd5f70e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-rtt2h" [2e1efefc-f778-4f98-8413-a6a04bd5f70e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.00483481s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.41s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-271955 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-271955 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.22s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-271955 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.27s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (70s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-271955 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-271955 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (1m9.996956378s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (70.00s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p false-271955 "pgrep -a kubelet"
I0920 17:50:56.864509    7542 config.go:182] Loaded profile config "false-271955": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.36s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (12.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-271955 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-5lwr4" [bb3252c0-ebe3-428f-bd95-e4c998948a13] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-5lwr4" [bb3252c0-ebe3-428f-bd95-e4c998948a13] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 12.003930626s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (12.33s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-271955 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.34s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-271955 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.24s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-271955 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.26s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (58.73s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-271955 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
E0920 17:51:32.979171    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/auto-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:51:35.540920    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/auto-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:51:40.662963    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/auto-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:51:50.905153    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/auto-271955/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-271955 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (58.726259753s)
--- PASS: TestNetworkPlugins/group/flannel/Start (58.73s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-271955 "pgrep -a kubelet"
I0920 17:51:51.980191    7542 config.go:182] Loaded profile config "enable-default-cni-271955": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.38s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-271955 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-5hrzm" [bc24afc7-8748-4d9f-b006-1e5b07ac2766] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-5hrzm" [bc24afc7-8748-4d9f-b006-1e5b07ac2766] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004015762s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.35s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-271955 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.27s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-271955 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-271955 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.22s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (78.19s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-271955 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-271955 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (1m18.192095467s)
--- PASS: TestNetworkPlugins/group/bridge/Start (78.19s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-9nd5p" [2bc48fbf-67dd-4ead-a60a-2ed1cfa04c7f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.006558419s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-271955 "pgrep -a kubelet"
E0920 17:52:36.883298    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/addons-877987/client.crt: no such file or directory" logger="UnhandledError"
I0920 17:52:36.905737    7542 config.go:182] Loaded profile config "flannel-271955": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.40s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.36s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-271955 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-5k8nz" [781f140a-25dc-454d-9488-2297584e7a36] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0920 17:52:41.224744    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/functional-108853/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-5k8nz" [781f140a-25dc-454d-9488-2297584e7a36] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004971135s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.36s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.28s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-271955 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.28s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-271955 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-271955 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (82.4s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kubenet-271955 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
E0920 17:53:18.673348    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/kindnet-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:53:18.679792    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/kindnet-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:53:18.691183    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/kindnet-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:53:18.712592    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/kindnet-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:53:18.754048    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/kindnet-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:53:18.835483    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/kindnet-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:53:18.997823    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/kindnet-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:53:19.319143    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/kindnet-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:53:19.962435    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/kindnet-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:53:21.244201    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/kindnet-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:53:23.806390    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/kindnet-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:53:28.928467    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/kindnet-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:53:39.169781    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/kindnet-271955/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kubenet-271955 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (1m22.401725846s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (82.40s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-271955 "pgrep -a kubelet"
I0920 17:53:48.142068    7542 config.go:182] Loaded profile config "bridge-271955": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.3s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-271955 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-m5xrc" [010cf246-d3d3-4dc9-96da-ce5ff313f518] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0920 17:53:51.917567    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/calico-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:53:51.923934    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/calico-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:53:51.935257    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/calico-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:53:51.956654    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/calico-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:53:51.998775    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/calico-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:53:52.080163    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/calico-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:53:52.242426    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/calico-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:53:52.564063    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/calico-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:53:53.206057    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/calico-271955/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-m5xrc" [010cf246-d3d3-4dc9-96da-ce5ff313f518] Running
E0920 17:53:54.487925    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/calico-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:53:57.049987    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/calico-271955/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004213064s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.30s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-271955 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-271955 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E0920 17:53:59.651495    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/kindnet-271955/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-271955 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (175.63s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-163626 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0920 17:54:20.711824    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/skaffold-349114/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:54:32.895326    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/calico-271955/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-163626 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m55.629388067s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (175.63s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.43s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kubenet-271955 "pgrep -a kubelet"
I0920 17:54:37.062682    7542 config.go:182] Loaded profile config "kubenet-271955": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.43s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (9.42s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-271955 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-5s2d6" [bb140143-8a32-470a-99a9-e82cb249cb5a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0920 17:54:40.612829    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/kindnet-271955/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-5s2d6" [bb140143-8a32-470a-99a9-e82cb249cb5a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 9.003839535s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (9.42s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-271955 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-271955 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-271955 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.22s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (81.71s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-113335 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0920 17:55:13.395368    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/custom-flannel-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:55:13.857063    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/calico-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:55:18.517426    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/custom-flannel-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:55:28.759447    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/custom-flannel-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:55:43.776093    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/skaffold-349114/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:55:49.240762    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/custom-flannel-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:55:57.170959    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/false-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:55:57.177393    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/false-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:55:57.188781    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/false-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:55:57.210184    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/false-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:55:57.251610    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/false-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:55:57.333760    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/false-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:55:57.495621    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/false-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:55:57.817738    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/false-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:55:58.459678    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/false-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:55:59.741241    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/false-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:56:02.302814    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/false-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:56:02.534510    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/kindnet-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:56:07.425022    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/false-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:56:17.666749    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/false-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:56:30.202137    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/custom-flannel-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:56:30.408797    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/auto-271955/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-113335 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (1m21.710014107s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (81.71s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.37s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-113335 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [326c7c37-23fb-4e85-8844-aad10a3a5168] Pending
helpers_test.go:344: "busybox" [326c7c37-23fb-4e85-8844-aad10a3a5168] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [326c7c37-23fb-4e85-8844-aad10a3a5168] Running
E0920 17:56:35.779242    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/calico-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:56:38.148075    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/false-271955/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004697202s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-113335 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.37s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.09s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-113335 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-113335 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.09s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (10.93s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-113335 --alsologtostderr -v=3
E0920 17:56:52.305908    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/enable-default-cni-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:56:52.312275    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/enable-default-cni-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:56:52.323663    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/enable-default-cni-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:56:52.345109    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/enable-default-cni-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:56:52.386613    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/enable-default-cni-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:56:52.468128    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/enable-default-cni-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:56:52.629849    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/enable-default-cni-271955/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-113335 --alsologtostderr -v=3: (10.932367523s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.93s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-113335 -n no-preload-113335
E0920 17:56:52.951679    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/enable-default-cni-271955/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-113335 -n no-preload-113335: exit status 7 (66.834095ms)
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-113335 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (268.02s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-113335 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0920 17:56:53.593455    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/enable-default-cni-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:56:54.875542    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/enable-default-cni-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:56:57.437872    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/enable-default-cni-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:56:58.112842    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/auto-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:57:02.559899    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/enable-default-cni-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:57:12.802019    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/enable-default-cni-271955/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-113335 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m27.661852268s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-113335 -n no-preload-113335
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (268.02s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-163626 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2fc4132f-9f7f-4442-b318-b797ff7dd14c] Pending
helpers_test.go:344: "busybox" [2fc4132f-9f7f-4442-b318-b797ff7dd14c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0920 17:57:19.110338    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/false-271955/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [2fc4132f-9f7f-4442-b318-b797ff7dd14c] Running
E0920 17:57:24.292317    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/functional-108853/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004014077s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-163626 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.60s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.17s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-163626 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-163626 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.014786375s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-163626 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.17s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (11.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-163626 --alsologtostderr -v=3
E0920 17:57:30.499086    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/flannel-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:57:30.505561    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/flannel-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:57:30.516938    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/flannel-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:57:30.538417    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/flannel-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:57:30.579836    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/flannel-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:57:30.661263    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/flannel-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:57:30.822922    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/flannel-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:57:31.144671    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/flannel-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:57:31.786833    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/flannel-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:57:33.069175    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/flannel-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:57:33.284008    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/enable-default-cni-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:57:35.630880    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/flannel-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:57:36.883777    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/addons-877987/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-163626 --alsologtostderr -v=3: (11.071564663s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.07s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-163626 -n old-k8s-version-163626
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-163626 -n old-k8s-version-163626: exit status 7 (83.579842ms)

-- stdout --
	Stopped
-- /stdout --

start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-163626 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (137.42s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-163626 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0920 17:57:40.752271    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/flannel-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:57:41.225479    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/functional-108853/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:57:50.994157    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/flannel-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:57:52.123853    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/custom-flannel-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:58:11.475586    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/flannel-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:58:14.246124    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/enable-default-cni-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:58:18.673720    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/kindnet-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:58:41.032886    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/false-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:58:46.376545    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/kindnet-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:58:48.416192    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/bridge-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:58:48.422613    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/bridge-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:58:48.434114    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/bridge-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:58:48.456802    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/bridge-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:58:48.498276    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/bridge-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:58:48.579712    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/bridge-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:58:48.741276    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/bridge-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:58:49.062959    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/bridge-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:58:49.704908    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/bridge-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:58:50.987068    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/bridge-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:58:51.917434    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/calico-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:58:52.437540    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/flannel-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:58:53.548428    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/bridge-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:58:58.670573    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/bridge-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:59:08.912515    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/bridge-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:59:19.620662    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/calico-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:59:20.711938    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/skaffold-349114/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:59:29.394744    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/bridge-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:59:36.168242    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/enable-default-cni-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:59:37.456248    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/kubenet-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:59:37.462817    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/kubenet-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:59:37.474335    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/kubenet-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:59:37.495804    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/kubenet-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:59:37.537171    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/kubenet-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:59:37.618718    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/kubenet-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:59:37.780231    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/kubenet-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:59:38.101737    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/kubenet-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:59:38.744069    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/kubenet-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:59:40.041336    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/kubenet-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:59:42.603558    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/kubenet-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 17:59:47.725162    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/kubenet-271955/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-163626 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m17.042162099s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-163626 -n old-k8s-version-163626
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (137.42s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.03s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-n9nf8" [af0c8f35-8bf3-4de0-8756-b3ea568201a2] Running
E0920 17:59:57.966520    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/kubenet-271955/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.030612435s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.03s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.13s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-n9nf8" [af0c8f35-8bf3-4de0-8756-b3ea568201a2] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003534301s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-163626 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.13s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-163626 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.82s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-163626 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-163626 -n old-k8s-version-163626
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-163626 -n old-k8s-version-163626: exit status 2 (333.32105ms)

-- stdout --
	Paused
-- /stdout --

start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-163626 -n old-k8s-version-163626
E0920 18:00:08.217352    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/custom-flannel-271955/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-163626 -n old-k8s-version-163626: exit status 2 (323.005585ms)

-- stdout --
	Stopped
-- /stdout --

start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-163626 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-163626 -n old-k8s-version-163626
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-163626 -n old-k8s-version-163626
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.82s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (47.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-557242 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0920 18:00:14.358796    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/flannel-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:00:18.447773    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/kubenet-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:00:35.965194    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/custom-flannel-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:00:57.170882    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/false-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:00:59.409089    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/kubenet-271955/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-557242 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (47.236499993s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (47.24s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.39s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-557242 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2e7c395a-520f-4537-9fce-f6cc26e5c304] Pending
helpers_test.go:344: "busybox" [2e7c395a-520f-4537-9fce-f6cc26e5c304] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2e7c395a-520f-4537-9fce-f6cc26e5c304] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.005303295s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-557242 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.39s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.53s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-557242 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-557242 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.379085317s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-557242 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.53s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (11.14s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-557242 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-557242 --alsologtostderr -v=3: (11.140111191s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.14s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-dnntw" [2fc9bb80-188d-4964-b3bc-29fe08fce30b] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004012579s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-557242 -n embed-certs-557242
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-557242 -n embed-certs-557242: exit status 7 (71.961821ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-557242 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/embed-certs/serial/SecondStart (270.76s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-557242 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0920 18:01:24.874686    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/false-271955/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-557242 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m30.344101799s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-557242 -n embed-certs-557242
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (270.76s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.14s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-dnntw" [2fc9bb80-188d-4964-b3bc-29fe08fce30b] Running
E0920 18:01:30.408808    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/auto-271955/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00439623s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-113335 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
E0920 18:01:32.278688    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/bridge-271955/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.14s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.32s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-113335 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.32s)

TestStartStop/group/no-preload/serial/Pause (4.29s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-113335 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p no-preload-113335 --alsologtostderr -v=1: (1.043240121s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-113335 -n no-preload-113335
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-113335 -n no-preload-113335: exit status 2 (491.884726ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-113335 -n no-preload-113335
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-113335 -n no-preload-113335: exit status 2 (482.428015ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-113335 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-113335 -n no-preload-113335
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-113335 -n no-preload-113335
--- PASS: TestStartStop/group/no-preload/serial/Pause (4.29s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (55.58s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-908310 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0920 18:01:52.305368    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/enable-default-cni-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:02:16.352481    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/old-k8s-version-163626/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:02:16.358838    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/old-k8s-version-163626/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:02:16.370198    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/old-k8s-version-163626/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:02:16.391618    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/old-k8s-version-163626/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:02:16.432984    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/old-k8s-version-163626/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:02:16.514416    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/old-k8s-version-163626/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:02:16.676158    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/old-k8s-version-163626/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:02:16.998023    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/old-k8s-version-163626/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:02:17.639616    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/old-k8s-version-163626/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:02:18.921904    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/old-k8s-version-163626/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:02:20.010053    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/enable-default-cni-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:02:21.330809    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/kubenet-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:02:21.483266    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/old-k8s-version-163626/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:02:26.605623    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/old-k8s-version-163626/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:02:30.498623    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/flannel-271955/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-908310 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (55.578001861s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (55.58s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.47s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-908310 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7bd921cb-2c6c-4517-aa8c-bee51cb303e4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0920 18:02:36.848169    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/old-k8s-version-163626/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:02:36.883810    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/addons-877987/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [7bd921cb-2c6c-4517-aa8c-bee51cb303e4] Running
E0920 18:02:41.224754    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/functional-108853/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.006156237s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-908310 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.47s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.13s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-908310 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-908310 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.027324139s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-908310 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.13s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-908310 --alsologtostderr -v=3
E0920 18:02:57.330261    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/old-k8s-version-163626/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-908310 --alsologtostderr -v=3: (11.001378879s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.00s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-908310 -n default-k8s-diff-port-908310
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-908310 -n default-k8s-diff-port-908310: exit status 7 (72.506393ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-908310 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (268.66s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-908310 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0920 18:02:58.201009    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/flannel-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:03:18.672964    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/kindnet-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:03:38.292115    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/old-k8s-version-163626/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:03:48.416148    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/bridge-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:03:51.917397    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/calico-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:04:16.120002    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/bridge-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:04:20.711252    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/skaffold-349114/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:04:37.456401    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/kubenet-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:05:00.225622    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/old-k8s-version-163626/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:05:05.172777    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/kubenet-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:05:08.217775    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/custom-flannel-271955/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-908310 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m28.29968513s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-908310 -n default-k8s-diff-port-908310
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (268.66s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-zvj5t" [a9876eb7-9897-4d06-8851-e34d1175afcd] Running
E0920 18:05:57.171031    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/false-271955/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004349701s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-zvj5t" [a9876eb7-9897-4d06-8851-e34d1175afcd] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003397863s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-557242 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-557242 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/embed-certs/serial/Pause (2.96s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-557242 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-557242 -n embed-certs-557242
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-557242 -n embed-certs-557242: exit status 2 (335.397573ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-557242 -n embed-certs-557242
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-557242 -n embed-certs-557242: exit status 2 (334.126611ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-557242 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-557242 -n embed-certs-557242
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-557242 -n embed-certs-557242
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.96s)

TestStartStop/group/newest-cni/serial/FirstStart (36.76s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-305738 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0920 18:06:30.409190    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/auto-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:06:32.747169    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/no-preload-113335/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:06:32.753840    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/no-preload-113335/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:06:32.765179    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/no-preload-113335/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:06:32.786537    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/no-preload-113335/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:06:32.827868    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/no-preload-113335/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:06:32.909255    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/no-preload-113335/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:06:33.070688    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/no-preload-113335/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:06:33.392299    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/no-preload-113335/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:06:34.034431    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/no-preload-113335/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:06:35.316281    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/no-preload-113335/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:06:37.878378    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/no-preload-113335/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:06:42.999718    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/no-preload-113335/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-305738 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (36.761343052s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (36.76s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.22s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-305738 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-305738 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.221457193s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.22s)

TestStartStop/group/newest-cni/serial/Stop (9.63s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-305738 --alsologtostderr -v=3
E0920 18:06:52.305837    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/enable-default-cni-271955/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:06:53.241794    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/no-preload-113335/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-305738 --alsologtostderr -v=3: (9.633191495s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (9.63s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-305738 -n newest-cni-305738
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-305738 -n newest-cni-305738: exit status 7 (71.183744ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-305738 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/newest-cni/serial/SecondStart (20.09s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-305738 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0920 18:07:13.724202    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/no-preload-113335/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:07:16.351589    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/old-k8s-version-163626/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-305738 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (19.579232354s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-305738 -n newest-cni-305738
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (20.09s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.34s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-305738 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.34s)

TestStartStop/group/newest-cni/serial/Pause (3.61s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-305738 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-305738 -n newest-cni-305738
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-305738 -n newest-cni-305738: exit status 2 (442.45276ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-305738 -n newest-cni-305738
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-305738 -n newest-cni-305738: exit status 2 (380.430229ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-305738 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-305738 -n newest-cni-305738
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-305738 -n newest-cni-305738
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.61s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-6fmtj" [5e48fb93-fca8-4e35-9709-0fff95b4924e] Running
E0920 18:07:30.498822    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/flannel-271955/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003703914s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-6fmtj" [5e48fb93-fca8-4e35-9709-0fff95b4924e] Running
E0920 18:07:36.883526    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/addons-877987/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00474327s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-908310 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-908310 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.75s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-908310 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-908310 -n default-k8s-diff-port-908310
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-908310 -n default-k8s-diff-port-908310: exit status 2 (310.887162ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-908310 -n default-k8s-diff-port-908310
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-908310 -n default-k8s-diff-port-908310: exit status 2 (305.329027ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-908310 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-908310 -n default-k8s-diff-port-908310
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-908310 -n default-k8s-diff-port-908310
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.75s)
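The sections above are raw Go test output, so the `--- PASS:` / `--- FAIL:` / `--- SKIP:` result markers can be tallied with ordinary text tools. A minimal sketch (the `tally` helper is illustrative, not part of minikube or Go; the sample lines are copied from this report):

```shell
# Count Go test outcomes by result marker, e.g. "--- PASS: TestName (1.23s)".
tally() { grep -cE "^--- $1:" || true; }   # grep -c exits 1 on zero matches

# Two sample result lines taken from this report:
log='--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.61s)
--- SKIP: TestDownloadOnlyKic (0.52s)'

printf '%s\n' "$log" | tally PASS   # prints 1
printf '%s\n' "$log" | tally SKIP   # prints 1
```

In practice you would feed a saved copy of the full log (e.g. `tally FAIL < report.txt`, where `report.txt` is a hypothetical filename) to cross-check the "Test fail (1/342)" and "Test skip (23/342)" counts in the report header.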

Test skip (23/342)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnlyKic (0.52s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-108524 --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-108524" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-108524
--- SKIP: TestDownloadOnlyKic (0.52s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/cilium (5.02s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-271955 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-271955

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-271955

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-271955

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-271955

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-271955

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-271955

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-271955

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-271955

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-271955

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-271955

>>> host: /etc/nsswitch.conf:
* Profile "cilium-271955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271955"

>>> host: /etc/hosts:
* Profile "cilium-271955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271955"

>>> host: /etc/resolv.conf:
* Profile "cilium-271955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271955"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-271955

>>> host: crictl pods:
* Profile "cilium-271955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271955"

>>> host: crictl containers:
* Profile "cilium-271955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271955"

>>> k8s: describe netcat deployment:
error: context "cilium-271955" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-271955" does not exist

>>> k8s: netcat logs:
error: context "cilium-271955" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-271955" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-271955" does not exist

>>> k8s: coredns logs:
error: context "cilium-271955" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-271955" does not exist

>>> k8s: api server logs:
error: context "cilium-271955" does not exist

>>> host: /etc/cni:
* Profile "cilium-271955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271955"

>>> host: ip a s:
* Profile "cilium-271955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271955"

>>> host: ip r s:
* Profile "cilium-271955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271955"

>>> host: iptables-save:
* Profile "cilium-271955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271955"

>>> host: iptables table nat:
* Profile "cilium-271955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271955"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-271955

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-271955

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-271955" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-271955" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-271955

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-271955

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-271955" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-271955" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-271955" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-271955" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-271955" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-271955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271955"

>>> host: kubelet daemon config:
* Profile "cilium-271955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271955"

>>> k8s: kubelet logs:
* Profile "cilium-271955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271955"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-271955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271955"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-271955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271955"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19672-2235/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 20 Sep 2024 17:35:35 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: offline-docker-290502
contexts:
- context:
    cluster: offline-docker-290502
    extensions:
    - extension:
        last-update: Fri, 20 Sep 2024 17:35:35 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: offline-docker-290502
  name: offline-docker-290502
current-context: offline-docker-290502
kind: Config
preferences: {}
users:
- name: offline-docker-290502
  user:
    client-certificate: /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/offline-docker-290502/client.crt
    client-key: /home/jenkins/minikube-integration/19672-2235/.minikube/profiles/offline-docker-290502/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-271955

>>> host: docker daemon status:
* Profile "cilium-271955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271955"

>>> host: docker daemon config:
* Profile "cilium-271955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271955"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-271955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271955"

>>> host: docker system info:
* Profile "cilium-271955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271955"

>>> host: cri-docker daemon status:
* Profile "cilium-271955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271955"

>>> host: cri-docker daemon config:
* Profile "cilium-271955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271955"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-271955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271955"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-271955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271955"

>>> host: cri-dockerd version:
* Profile "cilium-271955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271955"

>>> host: containerd daemon status:
* Profile "cilium-271955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271955"

>>> host: containerd daemon config:
* Profile "cilium-271955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271955"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-271955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271955"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-271955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271955"

>>> host: containerd config dump:
* Profile "cilium-271955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271955"

>>> host: crio daemon status:
* Profile "cilium-271955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271955"

>>> host: crio daemon config:
* Profile "cilium-271955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271955"

>>> host: /etc/crio:
* Profile "cilium-271955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271955"

>>> host: crio config:
* Profile "cilium-271955" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271955"

----------------------- debugLogs end: cilium-271955 [took: 4.824699729s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-271955" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-271955
--- SKIP: TestNetworkPlugins/group/cilium (5.02s)
TestStartStop/group/disable-driver-mounts (0.33s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-380843" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-380843
--- SKIP: TestStartStop/group/disable-driver-mounts (0.33s)